
Monday, December 14, 2020

Record CSI camera to H.264 video


gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080,format=NV12, framerate=30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvv4l2h264enc preset-level=4 MeasureEncoderLatency=1 ! 'video/x-h264, stream-format=(string)byte-stream, alignment=(string)au' ! filesink location=file.h264 -e 


gst-launch-1.0 filesrc location=file.h264 ! h264parse ! nvv4l2decoder ! nvvidconv ! nveglglessink -e
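If the same CSI stream needs to be consumed from a program instead of written to a file, the pipeline can also be read from Python through OpenCV's GStreamer backend. This is only a minimal sketch and assumes OpenCV was built with GStreamer support (see the OpenCV build notes further down):

import cv2

# CSI camera -> BGR frames in Python; converts NVMM NV12 to system-memory BGR and hands it to appsink.
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # process the frame here, e.g. cv2.imwrite("frame.jpg", frame)
cap.release()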


Wednesday, December 9, 2020

TLT+DeepStream on ICDAR-2013

Download the images and annotations from the ICDAR-2013 website.
ICDAR-2013 is hosted on the Robust Reading Competition site.
Registration is required before downloading.
Choose Challenges / Focused Scene Text.
The 2013 text is mostly upright; annotations have only two points.
The 2015 text appears at various angles; annotations have four points.

When generating the KITTI data, training directly on the raw images gave poor results.
A different strategy was used when converting the images: split one image into several.
When cropping, do not cut through any text.
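A rough sketch of that tiling idea in Python: split the image into overlapping tiles and keep only the annotations that fall entirely inside a tile, so no text box gets cut. The tile size, overlap and the (xmin, ymin, xmax, ymax) box format are assumptions here, not the exact script that was used.

def tile_image_and_boxes(img_w, img_h, boxes, tile=640, overlap=128):
    """boxes: list of (xmin, ymin, xmax, ymax) text boxes.
    Returns a list of ((tile_x, tile_y), boxes_in_tile) with box coordinates
    shifted into the tile; boxes that would be cut by a tile edge are dropped."""
    step = tile - overlap
    out = []
    for ty in range(0, max(img_h - tile, 0) + 1, step):
        for tx in range(0, max(img_w - tile, 0) + 1, step):
            kept = [(x1 - tx, y1 - ty, x2 - tx, y2 - ty)
                    for (x1, y1, x2, y2) in boxes
                    if x1 >= tx and y1 >= ty and x2 <= tx + tile and y2 <= ty + tile]
            if kept:
                out.append(((tx, ty), kept))
    return out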

With TLT I tested detectnet_v2 (resnet18, resnet50, darknet19) and object_detection yolov3 (darknet19).
detectnet_v2 with resnet18: precision 14.6133 %
Although the precision is low, tlt-infer still detects the text correctly.
detectnet_v2 with darknet19: precision 18.7146 %
However detectnet_v2 with darknet19 fails: tlt-infer cannot detect the text correctly.
yolov3 with darknet19: AP 0.769
It performs better than detectnet_v2.

The TLT feature I value most is tlt-prune: after the network is pruned,
it uses less memory and runs faster.
unpruned      Mem: 2.5G   PERF: 8.33
pruned 0.5    Mem: 2.1G   PERF: 28.83
pruned 0.7    Mem: 2.1G   PERF: 30.00

The config-file-path settings file for DeepStream's nvinfer:
tlt-encoded-model=yolo_darknet19_pruned_70.etlt
model-engine-file=yolo_darknet19_pruned_70.engine
The first launch is very slow because it is generating the yolo_darknet19_pruned_70.etlt_b1_gpu0_fp16.engine file.
$ mv yolo_darknet19_pruned_70.etlt_b1_gpu0_fp16.engine yolo_darknet19_pruned_70.engine
Subsequent runs use yolo_darknet19_pruned_70.engine directly.

The following is the log of the testing process.

Friday, December 4, 2020

Build TensorRT OSS on Jetson


Update CMake:
$ wget http://www.cmake.org/files/v3.13/cmake-3.13.0.tar.gz
$ tar xpvf cmake-3.13.0.tar.gz cmake-3.13.0
$ cd cmake-3.13.0
$ ./bootstrap --system-curl
$ make -j4
$ sudo ln -s /home/nano/Data/cmake-3.13.0/cmake /usr/local/bin/cmake

$ git clone -b release/7.1 https://github.com/nvidia/TensorRT
$ cd TensorRT/
$ git submodule update --init --recursive
$ export TRT_SOURCE=`pwd`
$ cd $TRT_SOURCE
$ mkdir -p build && cd build

$ /usr/local/bin/cmake .. -DGPU_ARCHS=53 \
-DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ \
-DCMAKE_C_COMPILER=/usr/bin/gcc \
-DCMAKE_CUDA_COMPILER=/usr/local/cuda-10.2/bin/nvcc \
-DTRT_BIN_DIR=`pwd`/out

$ make nvinfer_plugin -j$(nproc)
$ mkdir backup
$ sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3 backup
$ sudo cp libnvinfer_plugin.so.7.1.3 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3
$ sudo ldconfig

Download tlt-converter from here:
$ mkdir tlt-converter
$ mv tlt_7.1.zip tlt-converter
$ cd tlt-converter
$ unzip tlt_7.1.zip
$ vi ~/.bashrc    # add the following lines:
export TRT_LIB_PATH="/usr/lib/aarch64-linux-gnu"
export TRT_INC_PATH="/usr/include/aarch64-linux-gnu"

Friday, November 27, 2020

Install TensorFlow on Jetson Nano

$ sudo apt-get update
$ sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
$ sudo apt-get install python3-pip
$ sudo pip3 install -U pip testresources setuptools==49.6.0
$ sudo apt-get install virtualenv
$ mkdir envs
$ cd envs
$ sudo pip3 install -U numpy==1.16.1 future==0.18.2 mock==3.0.5 h5py==2.10.0 keras_preprocessing==1.1.1 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11

$ python3 -m virtualenv -p python3 tensorflow-2.3.1
$ source tensorflow-2.3.1/bin/activate
$ pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 tensorflow==2.3.1+nv20.11

$ python3 -m virtualenv -p python3 tensorflow-1.15.4
$ source tensorflow-1.15.4/bin/activate
$ pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 tensorflow==1.15.4+nv20.11

Common xpra commands

Start xterm; select to copy, shift+insert or middle-click to paste
Open the options menu with ctrl+middle-click
Set the scrollback buffer size with -sl
Show a scrollbar with -sb
White text on black with -fg white -bg black
>Xpra.exe start ssh://user@ip/session --exit-with-children=yes --start-child="xterm -fa Monospace -fs 16 -xrm 'XTerm*selectToClipboard: true' -sl 1000 -sb -fg white -bg black"

>Xpra.exe start ssh://user@ip/session --exit-with-children=yes --start-child=/opt/google/chrome/chrome
>Xpra.exe start ssh://user@ip/session --exit-with-children=yes --start-child=/usr/bin//chromium-browser
>Xpra.exe stop ssh://user@ip/session
>Xpra.exe attach ssh://user@ip/session
>Xpra.exe detach ssh://user@ip/session

Attach to the current desktop, like VNC:
>Xpra.exe shadow ssh://user@ip/1
>Xpra.exe shadow ssh://user@ip

$ xpra list
$ xpra start :session
$ DISPLAY=:session gedit

Thursday, November 26, 2020

xpra on Jetson Nano


In some of the steps below, unavoidable errors are simply skipped.
$ sudo apt-get install libx11-dev libxtst-dev libxcomposite-dev libxdamage-dev \
    libxkbfile-dev python-all-dev
$ sudo apt-get install libgtk-3-dev python3-dev python3-cairo-dev python-gi-dev cython3
$ sudo apt-get install xauth x11-xkb-utils
$ sudo apt-get install libx264-dev libvpx-dev yasm
$ sudo apt-get install libavformat-dev libavcodec-dev libswscale-dev
$ sudo apt-get install libturbojpeg-dev
$ sudo apt-get install libwebp-dev
$ sudo apt-get install uglifyjs brotli libjs-jquery libjs-jquery-ui gnome-backgrounds
$ sudo apt-get install python3-opengl python3-numpy python3-pil
$ sudo apt-get install python3-rencode python3-lz4 python3-dbus python3-cryptography \
    python3-netifaces python3-yaml
$ sudo apt-get install python3-setproctitle python3-xdg python3-pyinotify python3-opencv
$ sudo apt-get install libpam-dev quilt xserver-xorg-dev xutils-dev \
    xserver-xorg-video-dummy xvfb keyboard-configuration
$ sudo apt-get install python-kerberos python-gssapi
$ sudo apt-get install python-avahi
$ sudo apt-get install gstreamer1.0-pulseaudio gstreamer1.0-alsa \
    gstreamer1.0-plugins-base gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-ugly python-gst-1.0
$ sudo apt-get install cups-filters cups-common cups-pdf python3-cups
$ sudo apt-get install openssh-client sshpass python3-paramiko
$ sudo apt-get install dh-systemd
$ wget https://www.xpra.org/src/xpra-4.0.4.tar.xz
$ tar -xf xpra-4.0.4.tar.xz
$ cd xpra-4.0.4
$ sudo ./setup.py install
$ vi setup.py
Change the first line (shebang) to python3.
$ sudo ./setup.py install
$ sudo mkdir /usr/local/share/xpra/www/js/lib
$ sudo ./setup.py install

Wednesday, November 25, 2020

LXDE desktop with lightdm, VNC server, Chinese input

$ sudo dpkg-reconfigure lightdm
$ sudo apt update
To avoid a black screen at lightdm where login becomes impossible, hold systemd so it is not upgraded:
$ sudo apt-mark hold systemd
If the problem is fixed one day, remove the hold:
$ sudo apt-mark unhold systemd
$ sudo apt upgrade
$ sudo apt install x11vnc
$ sudo x11vnc -storepasswd
$ sudo vi /etc/systemd/system/x11vnc.service
# Description: Custom Service Unit file
# File: /etc/systemd/system/x11vnc.service
[Unit]
Description="x11vnc"
Requires=display-manager.service
After=display-manager.service

[Service]
ExecStart=/usr/bin/x11vnc -loop -nopw -xkb -repeat -noxrecord -noxfixes -noxdamage -forever -rfbport 5900 -display :0 -auth guess -rfbauth /home/nano/.vnc/passwd
ExecStop=/usr/bin/killall x11vnc
Restart=on-failure
RestartSec=2

[Install]
WantedBy=multi-user.target


$ sudo systemctl enable x11vnc.service
$ sudo systemctl start x11vnc.service

$ xrandr -q
$ sudo vi /etc/X11/xorg.conf
Section "Monitor"
    Identifier "DSI-0"
    Option "Ignore"
EndSection

Section "Screen"
    Identifier "Default Screen"
    Monitor "Configured Monitor"
    Device "Default Device"
    SubSection "Display"
        Depth 24
        Virtual 1280 800
    EndSubSection
EndSection

$ sudo apt-get install fcitx fcitx-chewing
Start up/Preferences/Language Support
Keyboard input method system: fcitx
Reboot
Start up/Preferences/Fcitx Configuration
Click + and untick Only Show Current Language
Add Chewing




Thursday, November 19, 2020

Nvidia TLT analysis

DetectPostprocessor::parseBoundingBox
networkInfo 640x368

outputCoverageLayerIndex
outputCoverageBuffer [4 40 23]: confidence of each of the 4 classes

outputBBoxLayerIndex
outputBboxBuffer [16 40 23] numClasses*4, x1, y1, x2, y2

targetShape [40 23]
gridSize 40*23
strideX 16
strideY 16
gcCenters0 [40] (0.5 16.5 32.5...624.5)/35
gcCenters1 [23] (0.5 16.5 32.5...352.5)/35
numClasses 4

ClassifyPostprocessor::parseAttributesFromSoftmaxLayers
m_OutputLayerInfo[1]
m_OutputLayerInfo[1].inferDims [12 1 1]: probability of each of the 12 classes
numClasses=12


Reference: BBox Ground Truth Generator
cov: Batch_size, Num_classes, image_height/16, image_width/16
bbox: Batch_size, Num_classes * 4, image_height/16, image_width/16 (where 4 is the number of coordinates per cell)
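A small Python sketch of how those buffers fit together, following the shapes logged above: compute the grid-cell centres (the gcCenters arrays) and keep the cells whose coverage exceeds a threshold. The bbox_norm = 35 value is assumed from the "/35" factor above, and exactly how the four offsets are combined with the centres is left to the nvinfer parser:

import numpy as np

num_classes, grid_w, grid_h = 4, 40, 23          # 640x368 input, stride 16
stride_x = stride_y = 16
bbox_norm = 35.0                                 # assumed from the "/35" above

# Stand-ins for outputCoverageBuffer [4 40 23] and outputBboxBuffer [16 40 23].
coverage = np.random.rand(num_classes, grid_w, grid_h)
bbox = np.random.rand(num_classes * 4, grid_w, grid_h)

# Grid-cell centres normalised by bbox_norm (gcCenters0 / gcCenters1).
gc_centers_x = (np.arange(grid_w) * stride_x + 0.5) / bbox_norm
gc_centers_y = (np.arange(grid_h) * stride_y + 0.5) / bbox_norm

threshold = 0.2
for c in range(num_classes):
    xs, ys = np.where(coverage[c] > threshold)   # cells covering an object of class c
    for x, y in zip(xs, ys):
        offsets = bbox[c * 4:(c + 1) * 4, x, y]  # the 4 per-class offsets for this cell
        # The parser combines these offsets with gc_centers_x[x], gc_centers_y[y]
        # and scales back by bbox_norm to get x1, y1, x2, y2 in pixels.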


Wednesday, November 18, 2020

Jupyter Notebook in virtualenv

$ source ~/envs/tensorflow-2.0/bin/activate
(tensorflow-2.0) $ pip3 install ipykernel
(tensorflow-2.0) $ ipython kernel install --user --name=tensorflow-2.0
(tensorflow-2.0) $ jupyter notebook

In the web UI:
1. When creating a new Notebook, choose tensorflow-2.0.
2. For an already-open ipynb, switch the virtualenv via menu/Kernel/Change kernel/tensorflow-2.0.



Wednesday, November 11, 2020

DeepStream nvinfer classification analysis

Key nvinfer parameters for classification:
# 1: object detection, 2: object classification
process-mode=2
# how many frames to wait before inferring on the same object again
secondary-reinfer-interval=0
# synchronous or asynchronous mode
classifier-async-mode=1

nvinfer has three threads:
gst_nvinfer_submit_input_buffer
 the main thread; receives inbuf from the upstream element and handles resize and crop
 drives gst_nvinfer_input_queue_loop by pushing onto input_queue
gst_nvinfer_input_queue_loop
 handles color format conversion
 drives gst_nvinfer_generate_output by pushing onto process_queue
gst_nvinfer_generate_output
 runs the neural network to infer the label

Synchronous mode
gst_nvinfer_submit_input_buffer
  gst_nvinfer_process_objects (nvinfer, inbuf, in_surf);
    check whether the object_id is already in object_history_map
    should_infer_object() decides whether to infer, based on object size (and its change) and how long ago it was last inferred
    objects that need no inference go into batch->objs_pending_meta_attach
    if the object_id is not in object_history_map, create it
    if inference is needed, create a frame, put it into batch->frames, and link object_history_map with frame.history
    the label can later be retrieved through this link
    objects that need inference go into batch->frames
    once enough frames have been collected,
    convert_batch_and_push_to_input_thread()
      handles resize and crop
      g_queue_push_tail (nvinfer->input_queue, batch);
  finally push_buffer: g_queue_push_tail (nvinfer->input_queue, buf_push_batch);
gst_nvinfer_input_queue_loop
 handles color format conversion
 drives gst_nvinfer_generate_output by pushing onto process_queue
gst_nvinfer_generate_output
  if push_buffer, gst_pad_push() sends inbuf downstream
  infers the label and updates object_history_map through frame.history
  attach_metadata_segmentation()
  for the earlier batch->objs_pending_meta_attach entries, attach_metadata_segmentation()

Asynchronous mode
gst_nvinfer_submit_input_buffer
  gst_nvinfer_process_objects (nvinfer, inbuf, in_surf);
    check whether the object_id is already in object_history_map
    should_infer_object() decides whether to infer, based on object size (and its change) and how long ago it was last inferred
    if a label already exists, attach_metadata_classifier()
    if the object_id is not in object_history_map, create it
    if inference is needed, create a frame, put it into batch->frames, and link object_history_map with frame.history
    the label can later be retrieved through this link
    once enough frames have been collected,
    convert_batch_and_push_to_input_thread()
      handles resize and crop
      g_queue_push_tail (nvinfer->input_queue, batch);
  gst_pad_push() sends inbuf downstream
gst_nvinfer_input_queue_loop
 handles color format conversion
 drives gst_nvinfer_generate_output by pushing onto process_queue
gst_nvinfer_generate_output
  infers the label and updates object_history_map through frame.history

CPU/GPU (CUDA) asynchronous parallelism analysis
gst_nvinfer_submit_input_buffer
  cudaMemset2DAsync(convertStream)
gst_nvinfer_input_queue_loop
  cudaStreamWaitEvent(m_PreProcessStream, m_InputConsumedEvent)
  convertFcn(m_PreProcessStream)
  cudaStreamAddCallback(m_PreProcessStream)
  cudaEventRecord(m_PreProcessCompleteEvent, m_PreProcessStream)
  cudaStreamWaitEvent(m_InferStream, m_PreProcessCompleteEvent)
  enqueue(m_InferStream, m_InputConsumedEvent)
  cudaEventRecord(m_InferCompleteEvent, m_InferStream)
  cudaStreamWaitEvent(m_PostprocessStream, m_InferCompleteEvent)
  cudaMemcpyAsync(m_PostprocessStream)
  cudaEventRecord(m_OutputCopyDoneEvent, m_PostprocessStream)
gst_nvinfer_generate_output
  cudaEventSynchronize(m_OutputCopyDoneEvent)

Monday, November 2, 2020

A look at DeepStream's gst-dsexample


This example comes in two versions,
gstdsexample.cpp and gstdsexample_optimized.cpp.
The version is selected with the USE_OPTIMIZED_DSEXAMPLE setting in the Makefile.

To test saving to file, enable the following switch and run with sudo:
//#define DSEXAMPLE_DEBUG

When running on a Jetson machine, nvegltransform must be inserted between nvdsosd and nveglglessink.
On a Tesla machine, nvvideoconvert needs nvbuf-memory-type=nvbuf-mem-cuda-unified.
gstdsexample.cpp has the full-frame and blur-objects properties.
gstdsexample_optimized.cpp has the full-frame and batch-size properties.
An example:
$ gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4 \
! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt ! \
nvvideoconvert nvbuf-memory-type=nvbuf-mem-cuda-unified ! 'video/x-raw(memory:NVMM), format=RGBA' ! \
dsexample full-frame=0 blur-objects=1 ! nvdsosd ! nveglglessink


Notes on gstdsexample.cpp
gst_dsexample_transform_ip()



Notes on gstdsexample_optimized.cpp
gst_dsexample_start()
Creates two queues:
  dsexample->process_queue = g_queue_new ();
  dsexample->cvmat_queue = g_queue_new ();
and pre-loads two cv::Mat arrays into cvmat_queue:
  for (int i = 0; i < 2; i++) {
    cvmat = new cv::Mat[dsexample->max_batch_size];
    g_queue_push_tail (dsexample->cvmat_queue, cvmat);
  }
Starts the gst_dsexample_output_loop thread:
  dsexample->process_thread =
      g_thread_new ("dsexample-process-thread", gst_dsexample_output_loop,
      dsexample);

gst_dsexample_submit_input_buffer()
Invoked each time data arrives at the sink.
One buffer can contain multiple frames, and each frame multiple objects.
scale_and_fill_data() puts the data into dsexample->batch_insurf.
When dsexample->batch_insurf is full (or on the final, not-full batch),
convert_batch_and_push_to_process_thread() converts it into dsexample->inter_buf:
  batch->cvmat = (cv::Mat *) g_queue_pop_head (dsexample->cvmat_queue);
  batch->cvmat is produced from dsexample->inter_buf
  g_queue_push_tail (dsexample->process_queue, batch);
When gst_dsexample_submit_input_buffer() is about to return,
one extra output signal is pushed (batch->push_buffer = TRUE):
    g_queue_push_tail (dsexample->process_queue, buf_push_batch);

gst_dsexample_output_loop()
The output thread; loops until shutdown.
  batch = g_queue_pop_head (dsexample->process_queue));
  if (batch->push_buffer) {
    gst_pad_push (GST_BASE_TRANSFORM_SRC_PAD (dsexample), batch->inbuf);
  }
  g_queue_push_tail (dsexample->cvmat_queue, batch->cvmat);



Wednesday, October 28, 2020

Using YOLOv4 with DeepStream

Reference: Using YOLOv4 on NVIDIA DeepStream 5.0
From darknet, download yolov4.cfg, yolov4.weights, yolov4-tiny.cfg, yolov4-tiny.weights.
The width and height in the cfg files can be changed.

$ pip3 install torch
$ pip3 install torchvision
$ git clone https://github.com/Tianxiaomo/pytorch-YOLOv4.git
$ cd pytorch-YOLOv4
$ pip3 install onnxruntime
$ python3 demo_darknet2onnx.py yolov4.cfg yolov4.weights ./data/giraffe.jpg 1
This produces yolov4_1_3_416_416_static.onnx.
/usr/src/tensorrt/bin/trtexec --onnx=yolov4_1_3_416_416_static.onnx \
--explicitBatch --saveEngine=yolov4_1_3_416_416_fp16.engine \
--workspace=4096 --fp16
This produces yolov4_1_3_416_416_fp16.engine.
$ cd ..
$ git clone https://github.com/NVIDIA-AI-IOT/yolov4_deepstream
$ cd yolov4_deepstream
$ sudo cp -r deepstream_yolov4 /opt/nvidia/deepstream/deepstream-5.0/sources
$ cd /opt/nvidia/deepstream/deepstream-5.0/sources
$ sudo chown user.group deepstream_yolov4
$ cd deepstream_yolov4/nvdsinfer_custom_impl_Yolo/
$ make
$ cd ..
Copy yolov4_1_3_416_416_fp16.engine here.
Edit config_infer_primary_yoloV4.txt:
model-engine-file=yolov4-tiny_1_3_416_416_fp16.engine

Edit deepstream_app_config_yoloV4.txt:
# do not save to a file
[sink0]
enable=0
# on-screen display
[sink1]
enable=1
type=2
sync=0
display-id=0
offset-x=0
offset-y=0
width=0
height=0
overlay-id=1
source-id=0
# do not set model-engine-file here
[primary-gie]
enable=1
#model-engine-file=yolov4_1_3_320_320_fp16.engine
# enable the tracker
[tracker]
enable=1

Run:
$ deepstream-app -c deepstream_app_config_yoloV4.txt 




Install TensorRT

Reference: NVIDIA TensorRT Documentation

Go to https://developer.nvidia.com/tensorrt and click Download.
Log in with your account.
Select TensorRT 7.2.1.
Select TensorRT 7.2.1 for Linux and CUDA 10.2.
Select TensorRT 7.2.1 for Ubuntu 18.04 and CUDA 10.2 DEB local repo package.

$ os="ubuntu1804"
$ tag="cuda10.2-trt7.2.1.6-ga-20201006"
$ sudo dpkg -i nv-tensorrt-repo-${os}-${tag}_1-1_amd64.deb
$ sudo apt-key add /var/nv-tensorrt-repo-${tag}/7fa2af80.pub
$ sudo apt-get update
$ sudo apt-get install tensorrt
$ sudo apt-get install python-libnvinfer-dev
$ sudo apt-get install python3-libnvinfer-dev
$ sudo apt-get install uff-converter-tf
$ sudo apt-get install onnx-graphsurgeon
Check whether the installation succeeded:
$ dpkg -l | grep TensorRT


Using an ONNX model with DeepStream

Reference: How to deploy ONNX models on NVIDIA Jetson Nano using DeepStream
From the ONNX model zoo, find the Tiny YOLOv2 model and download tinyyolov2-8.onnx (opset version 8).

Tiny YOLOv2 explained:
  Input shape (1x3x416x416)
  Output shape (1x125x13x13)
    The 125 channels split into 5 boxes, each with 25 values (32-bit floats)
    The first 5 values are the bounding box, the last 20 are class probabilities
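A minimal NumPy sketch of decoding that output, assuming the usual YOLOv2 region-layer layout and the standard Tiny YOLOv2 VOC anchors (the C parser used below may differ in details):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Standard Tiny YOLOv2 (VOC) anchors, in grid-cell units (assumption).
ANCHORS = [(1.08, 1.19), (3.42, 4.41), (6.63, 11.38), (9.42, 5.11), (16.62, 10.52)]

def decode_tiny_yolov2(output, conf_thresh=0.3):
    """output: (125, 13, 13) array -> list of (cx, cy, w, h, score, class_id) in pixels (416x416 input)."""
    out = output.reshape(5, 25, 13, 13)          # 5 boxes x 25 values per grid cell
    detections = []
    for b, (aw, ah) in enumerate(ANCHORS):
        for gy in range(13):
            for gx in range(13):
                tx, ty, tw, th, tobj = out[b, 0:5, gy, gx]
                cls = out[b, 5:, gy, gx]
                cls = np.exp(cls - cls.max()); cls /= cls.sum()   # softmax over the 20 classes
                score = sigmoid(tobj) * cls.max()
                if score < conf_thresh:
                    continue
                cx = (gx + sigmoid(tx)) * 32.0   # 416 / 13 = 32 px per cell
                cy = (gy + sigmoid(ty)) * 32.0
                w = aw * np.exp(tw) * 32.0
                h = ah * np.exp(th) * 32.0
                detections.append((cx, cy, w, h, float(score), int(cls.argmax())))
    return detections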

$ git clone https://github.com/thatbrguy/Deep-Stream-ONNX.git
$ cd Deep-Stream-ONNX
$ mv ~/Downloads/tinyyolov2-8.onnx .
Download sample.tar.gz and extract it.

$ vi config/config_infer_custom_yolo.txt
onnx-file=../tinyyolov2-8.onnx
parse-bbox-func-name=NvDsInferParseCustomYoloV2Tiny
custom-lib-path=../custom_bbox_parser/libnvdsinfer_custom_bbox_tiny_yolo.so

$ vi custom_bbox_parser/Makefile
SRCFILES:=nvdsparsebbox_tiny_yolo.cpp
TARGET_LIB:=libnvdsinfer_custom_bbox_tiny_yolo.so
DEEPSTREAM_PATH:=/opt/nvidia/deepstream/deepstream-5.0

$ vi custom_bbox_parser/nvdsparsebbox_tiny_yolo.cpp
    //assert (layer.dims.numDims == 3);  // changed to:
    assert (layer.inferDims.numDims == 3);

extern "C" bool NvDsInferParseCustomYoloV2Tiny(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList);
outputLayersInfo: the ONNX model's outputs, e.g. outputLayersInfo[0].buffer
networkInfo: information about the ONNX model
detectionParams: config parameters such as numClassesConfigured
objectList: the output list we are expected to fill

Test:
$ deepstream-app -c config/deepstream_app_custom_yolo.txt

Monday, October 26, 2020

RTSP service on Jetson Nano

Download test-launch.c v1.18.
Install the required packages:
$ sudo apt-get install libgstrtspserver-1.0 libgstreamer1.0-dev
Compile:
$ gcc test-launch.c -o test-launch $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)
If the error
undefined reference to `gst_rtsp_media_factory_set_enable_rtcp'
appears and the build fails, just delete the related code.

Test by starting the service:
$ ./test-launch "videotestsrc ! omxh265enc ! rtph265pay name=pay0 pt=96"
View the stream (on a Jetson platform):
$ gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! application/x-rtp, media=video ! queue ! decodebin ! nvvidconv ! nveglglessink
View the stream (on a generic platform):
$ gst-launch-1.0 rtspsrc location=rtsp://192.168.0.105:8554/test ! application/x-rtp, media=video ! queue ! decodebin ! nvvideoconvert ! nveglglessink
Using the CSI camera:
$ ./test-launch "nvarguscamerasrc ! \
video/x-raw(memory:NVMM), format=NV12, width=640, height=480, framerate=30/1 ! \
nvvidconv ! video/x-raw ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"

Friday, October 23, 2020

DeepStream installation

Install the required packages:
$ sudo apt install \
    libssl1.0.0 \
    libgstreamer1.0-0 \
    gstreamer1.0-tools \
    gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-ugly \
    gstreamer1.0-libav \
    libgstrtspserver-1.0-0 \
    libjansson4

Install the driver; see "nvidia driver install on ubuntu" below.
Install CUDA Toolkit 10.2:
Go to https://developer.nvidia.com/cuda-downloads
Find Archive of Previous CUDA Releases under Resources.
Select CUDA Toolkit 10.2.
Select Linux/x86_64/Ubuntu/18.04/deb (local).
Install following the instructions; if CUDA is already installed, remove it first:
$ sudo apt-get remove cuda
$ sudo apt-cache madison cuda
$ sudo apt-get install cuda=10.2.89-1

Install TensorRT 7.0: https://developer.nvidia.com/nvidia-tensorrt-download
Install librdkafka:
$ git clone https://github.com/edenhill/librdkafka.git
$ cd librdkafka
$ git reset --hard 7101c2310341ab3f4675fc565f64f0967e135a6a
$ ./configure
$ make
$ sudo make install
$ sudo mkdir -p /opt/nvidia/deepstream/deepstream-5.0/lib 
$ sudo cp /usr/local/lib/librdkafka* /opt/nvidia/deepstream/deepstream-5.0/lib

Download the DeepStream SDK:
https://developer.nvidia.com/deepstream-getting-started
$ sudo apt-get install ./deepstream-5.0_5.0.0-1_amd64.deb
$ sudo tar -xvf deepstream_sdk_v5.0.0_x86_64.tbz2 -C /
$ cd /opt/nvidia/deepstream/deepstream-5.0/
$ sudo ./install.sh
$ sudo ldconfig

If a "no such element factory" error appears and Gst elements cannot be created:
$ rm $HOME/.cache/gstreamer-1.0/registry.x86_64.bin



nvidia driver install on ubuntu

List the available drivers:
$ ubuntu-drivers devices
$ sudo ubuntu-drivers autoinstall
The install may fail; check for missing dependencies (Depends) and install them manually:
$ sudo apt install <missing dependency>
If more dependencies are missing, keep installing them, and finally:
$ sudo ubuntu-drivers autoinstall
Reboot:
$ sudo reboot

Check the version:
$ nvidia-smi

Thursday, October 22, 2020

Face mask detection

Reference: face_mask_detection
See "Nvidia TLT (Transfer Learning Toolkit)" below for installing the required tools.

Reference datasets:
  with mask
  without mask
After downloading, arrange them according to the expected directory layout.

$ docker run --name face-mask --runtime=nvidia --gpus all -it -v /home/user/Data/DeepStreamer/tlt-experiments/:/workspace/tlt-experiments -p 8888:8888 nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3 /bin/bash
root@0aa10155a3bd:/workspace/tlt-experiments# exit
exit
$ docker ps
CONTAINER ID        IMAGE                                         COMMAND                  CREATED             STATUS              PORTS                    NAMES
0aa10155a3bd        nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3   "install_ngc_cli.sh …"   8 minutes ago       Up 6 minutes        0.0.0.0:8888->8888/tcp   face-mask
$ docker exec -it face-mask /bin/bash
# cd tlt-experiments/face-mask-detection
# python -m pip install -r requirements.txt
Edit face-mask-detection/tlt_specs/detectnet_v2_tfrecords_kitti_trainval.txt:
root_directory_path: "/workspace/tlt-experiments/face-mask-detection-data/kitti/train"
Edit face-mask-detection/tlt_specs/detectnet_v2_train_resnet18_kitti.txt:
tfrecords_path: "/workspace/tlt-experiments/face-mask-detection-data/kitti/tfrecords/kitti_trainval/*"
image_directory_path: "/workspace/tlt-experiments/face-mask-detection-data/kitti/train"
pretrained_model_file: "/workspace/tlt-experiments/detectnet_v2/pretrained_resnet18/tlt_pretrained_detectnet_v2_vresnet18/resnet18.hdf5"
Edit face-mask-detection/tlt_specs/detectnet_v2_retrain_resnet18_kitti.txt:
tfrecords_path: "/workspace/tlt-experiments/face-mask-detection-data/kitti/tfrecords/kitti_trainval/*"
image_directory_path: "/workspace/tlt-experiments/face-mask-detection-data/kitti/train"
pretrained_model_file: "/workspace/tlt-experiments/face-mask-detection-data/detectnet_v2/experiment_dir_pruned/resnet18_nopool_bn_detectnet_v2_pruned.tlt"
Edit face-mask-detection/tlt_specs/detectnet_v2_inference_kitti_tlt.txt:
model: "/workspace/tlt-experiments/detectnet_v2/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt"
Edit face-mask-detection/tlt_specs/detectnet_v2_inference_kitti_etlt.txt:
trt_engine: "/workspace/tlt-experiments/detectnet_v2/experiment_dir_final_unpruned/resnet18_detector.trt"

Generate the KITTI-format annotations:
# vi data2kitti.sh
/usr/bin/python data2kitti.py --kaggle-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/Kaggle Medical Mask Dataset/medical-masks-dataset' --mafa-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/MAFA Dataset' --fddb-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/FDDB Dataset' --widerface-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/Wider Face Dataset' --kitti-base-path '/workspace/tlt-experiments/face-mask-detection-data/kitti' --train
/usr/bin/python data2kitti.py --kaggle-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/Kaggle Medical Mask Dataset/medical-masks-dataset' --mafa-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/MAFA Dataset' --fddb-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/FDDB Dataset' --widerface-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/Wider Face Dataset' --kitti-base-path '/workspace/tlt-experiments/face-mask-detection-data/kitti' --val
# ./data2kitti.sh
# jupyter notebook --ip 0.0.0.0 --allow-root
    To access the notebook, open this file in a browser:
        file:///root/.local/share/jupyter/runtime/nbserver-241-open.html
    Or copy and paste one of these URLs:
        http://0aa10155a3bd:8888/?token=29cbb63890972635e00320fe6c66bda30b2c81daa7400f3d
     or http://127.0.0.1:8888/?token=29cbb63890972635e00320fe6c66bda30b2c81daa7400f3d
Open http://127.0.0.1:8888/?token=29cbb63890972635e00320fe6c66bda30b2c81daa7400f3d in a browser.
Go to http://192.168.0.101:8888/notebooks/face-mask-detection.ipynb
Point the following environment variables at the correct directories:
# Setting up env variables for cleaner command line commands.
%env KEY=tlt_encode
# User directory
%env USER_EXPERIMENT_DIR=/workspace/tlt-experiments/detectnet_v2
# Download directory
%env DATA_DOWNLOAD_DIR=/workspace/tlt-experiments/face-mask-detection-data/kitti
%env SPECS_DIR=/workspace/tlt-experiments/face-mask-detection/tlt_specs

Generate the tfrecords:
!tlt-dataset-convert -d $SPECS_DIR/detectnet_v2_tfrecords_kitti_trainval.txt \
                     -o $DATA_DOWNLOAD_DIR/tfrecords/kitti_trainval/kitti_trainval

Query tlt_pretrained_detectnet_v2:
!ngc registry model list nvidia/tlt_pretrained_detectnet_v2:*
!mkdir -p $USER_EXPERIMENT_DIR/pretrained_resnet18/
Download tlt_pretrained_detectnet_v2:
!ngc registry model download-version nvidia/tlt_pretrained_detectnet_v2:resnet18 \
    --dest $USER_EXPERIMENT_DIR/pretrained_resnet18

Start training:
!tlt-train detectnet_v2 -e $SPECS_DIR/detectnet_v2_train_resnet18_kitti.txt \
                        -r $USER_EXPERIMENT_DIR/experiment_dir_unpruned \
                        -k $KEY \
                        -n resnet18_detector

Evaluate the trained model:
!tlt-evaluate detectnet_v2 -e $SPECS_DIR/detectnet_v2_train_resnet18_kitti.txt\
                           -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/resnet18_detector.tlt \
                           -k $KEY

Prune the model; a larger -pth (threshold) gives a smaller model and faster inference, but lower accuracy:
!mkdir -p $USER_EXPERIMENT_DIR/experiment_dir_pruned
!tlt-prune -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/resnet18_detector.tlt \
           -o $USER_EXPERIMENT_DIR/experiment_dir_pruned/resnet18_nopool_bn_detectnet_v2_pruned.tlt \
           -eq union \
           -pth 0.7 \
           -k $KEY

Retrain the pruned model:
!tlt-train detectnet_v2 -e $SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt \
                        -r $USER_EXPERIMENT_DIR/experiment_dir_retrain \
                        -k $KEY \
                        -n resnet18_detector_pruned

Evaluate the retrained model:
!tlt-evaluate detectnet_v2 -e $SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt \
                           -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt \
                           -k $KEY

Run inference on the test images with the model:
!tlt-infer detectnet_v2 -e $SPECS_DIR/detectnet_v2_inference_kitti_tlt.txt \
                        -o $USER_EXPERIMENT_DIR/tlt_infer_testing \
                        -i $DATA_DOWNLOAD_DIR/test/images \
                        -k $KEY

Export a model that DeepStream can use:
!mkdir -p $USER_EXPERIMENT_DIR/experiment_dir_final
# Removing a pre-existing copy of the etlt if there has been any.
import os
output_file=os.path.join(os.environ['USER_EXPERIMENT_DIR'],
                         "experiment_dir_final/resnet18_detector_thermal.etlt")
if os.path.exists(output_file):
    os.system("rm {}".format(output_file))
!tlt-export detectnet_v2 \
            -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt \
            -o $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector_thermal.etlt \
            -k $KEY

Edit ds_configs/config_infer_primary_masknet_gpu.txt:
tlt-encoded-model=/home/nano/Data/DeepStream/face-mask-detection/experiment_dir_final/resnet18_detector_thermal.etlt
# do not use model-engine-file
input-dims=3;480;640;0

Edit ds_configs/deepstream_app_source1_camera_masknet_gpu.txt:
width=640
height=480

Done:
$ deepstream-app -c deepstream_app_source1_camera_masknet_gpu.txt


Tuesday, October 20, 2020

Nvidia TLT (Transfer Learning Toolkit)


Transfer learning reduces training time, and also the amount of data and annotation needed for training.
TLT is delivered with Docker; see "Docker installation and usage".
TLT provides pre-trained models, downloaded through NGC; see "NVIDIA GPU Cloud (NGC)" below.
TLT can be used through Jupyter notebooks inside the container.
TLT exports models in the .etlt format, which can be used with DeepStream and TensorRT.
Pruning removes less important nodes from the model, reducing memory use and speeding up inference.
Dataset augmentation: color shifts, blur, rotation, scaling, flipping, cropping.

TrafficCamNet and DashCamNet are both for traffic.
TrafficCamNet is for fixed viewpoints, DashCamNet for cameras on a moving vehicle.

PeopleNet detects people, bags, and faces.
FaceDetectIR focuses on faces.

Transfer Learning Toolkit
Download a pre-trained model
Prepare the training and test datasets
Train the model
Evaluate the model
Prune the model
Export the model

Installation steps
Confirm the NVIDIA GPU driver version is v410.xx or later:
$ nvidia-smi
Install NVIDIA Docker; see nvidia-docker and Installing Docker CE on Ubuntu.
$ curl https://get.docker.com | sh
Start the docker service at boot:
$ sudo systemctl start docker && sudo systemctl enable docker
Add the apt key:
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
$ sudo systemctl restart docker
Test that the installation is complete:
$ sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
See "NVIDIA GPU Cloud (NGC)" to obtain an API key.
$ docker login nvcr.io
Username: $oauthtoken
Password: YOUR_NGC_API_KEY
The current TLT version is v2.0_py3.
Download TLT:
$ docker pull nvcr.io/nvidia/tlt-streamanalytics:<version>
Run TLT:
$ docker run --runtime=nvidia -it nvcr.io/nvidia/tlt-streamanalytics:<version> /bin/bash
Mount a directory to access local data:
$ docker run --runtime=nvidia -it -v /home/<username>/tlt-experiments:/workspace/tlt-experiments nvcr.io/nvidia/tlt-streamanalytics:<version> /bin/bash
Map a network port:
$ docker run --runtime=nvidia -it -v /home/<username>/tlt-experiments:/workspace/tlt-experiments -p 8888:8888 nvcr.io/nvidia/tlt-streamanalytics:<version>
Run a Jupyter notebook:
$ cd examples/
$ jupyter notebook --ip 0.0.0.0 --allow-root

Set the NGC API key:
$ ngc config set
List the models on NGC:
$ ngc registry model list <model_glob_string>
For example:
$ ngc registry model list nvidia/tlt_pretrained_*
List the versions of tlt_pretrained_detectnet_v2:
$ ngc registry model list nvidia/tlt_pretrained_detectnet_v2:*
Download a model:
$ ngc registry model download-version <ORG/model_name:version> -dest <path_to_download_dir>
For example:
$ ngc registry model download-version nvidia/tlt_pretrained_classification:resnet18 --dest $USER_EXPERIMENT_DIR/pretrained_resnet18

Object detection label files use the KITTI format,
but only the class names and bounding boxes are used.



Monday, October 19, 2020

NVIDIA GPU Cloud (NGC)

Reference: NVIDIA GPU CLOUD documentation

Register and activate your NGC account:
1. Open the NGC Catalog landing page; from Welcome Guest at the top right, choose Create an Account.
2. Enter your name and email address, accept the terms of use, click Sign Up, then follow the instructions in the confirmation email.
3. From the email, open the set-password page.
4. Set a password as instructed; afterwards you can sign in at https://ngc.nvidia.com/signin/email with your email and password.
5. Accept the terms of use.

Generate an NGC API key:
1. Sign in at https://ngc.nvidia.com/signin/email
2. From the user account menu at the top right, choose Setup.
3. Click Get API Key.
4. Click Confirm to generate the key.
The key only needs to be generated once; generating a new key invalidates the old one.

Install the NGC Registry CLI (command-line interface):
1. Sign in to the NGC website.
2. From the user account menu at the top right, choose Setup.
3. Click Downloads under Install NGC CLI.
4. Choose the platform and install with the commands shown, for example:
AMD64 Linux
Download:
$ wget -O ngccli_cat_linux.zip https://ngc.nvidia.com/downloads/ngccli_cat_linux.zip && unzip -o ngccli_cat_linux.zip && chmod u+x ngc
Verify the downloaded file:
$ md5sum -c ngc.md5
Add the command to the PATH environment variable:
$ echo "export PATH=\"\$PATH:$(pwd)\"" >> ~/.bash_profile && source ~/.bash_profile
Set the API key:
$ ngc config set

ngc command notes
Show the version:
$ ngc --version
Show the available options:
$ ngc -h
Show the options for registry image:
$ ngc registry image -h
Show the options for registry image info:
$ ngc registry image info -h


Friday, October 16, 2020

Measuring DeepStream performance

1. Read the timestamps directly
gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, GstPadProbeCallback, NULL, NULL);

static GstPadProbeReturn GstPadProbeCallback(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    GstBuffer *buf = (GstBuffer *) info->data;
    if (GST_BUFFER_DTS_IS_VALID(buf)) {
        GstClockTime dts = GST_BUFFER_DTS(buf);
        g_print("dts=%ldms ", dts/1000000);
    }
    if (GST_BUFFER_PTS_IS_VALID(buf)) {
        GstClockTime pts = GST_BUFFER_PTS(buf);
        g_print("pts=%ldms ", pts/1000000);
    }
    if (GST_BUFFER_DURATION_IS_VALID(buf)) {
        GstClockTime duration = GST_BUFFER_DURATION(buf);
        g_print("duration=%ldms ", duration/1000000);
    }
    g_print("\n");
    return GST_PAD_PROBE_OK;
}
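The same probe can be written in Python with the PyGObject GStreamer bindings; a rough equivalent (assuming python3-gi and the GStreamer introspection data are installed):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def on_buffer(pad, info, user_data):
    buf = info.get_buffer()
    if buf.dts != Gst.CLOCK_TIME_NONE:
        print("dts=%dms" % (buf.dts // 1000000), end=" ")
    if buf.pts != Gst.CLOCK_TIME_NONE:
        print("pts=%dms" % (buf.pts // 1000000), end=" ")
    if buf.duration != Gst.CLOCK_TIME_NONE:
        print("duration=%dms" % (buf.duration // 1000000), end=" ")
    print()
    return Gst.PadProbeReturn.OK

# pad = some_element.get_static_pad("sink")
# pad.add_probe(Gst.PadProbeType.BUFFER, on_buffer, None)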

GStreamer textoverlay & timeoverlay

Use textoverlay to display text:
$ gst-launch-1.0 -v videotestsrc ! textoverlay text="Room A" valignment=top halignment=left font-desc="Sans, 72" ! nvvideoconvert ! nveglglessink

Use textoverlay to display subtitles and timeoverlay to display the time.
The subtitle file:
$ cat subtitles.srt 
 1
 00:00:03,000 --> 00:00:05,000
 Hello? (3-5s)

 2
 00:00:08,000 --> 00:00:13,000
 Yes, this is a subtitle. Don't
 you like it? (8-13s)

 3
 00:00:18,826 --> 00:01:02,886
 Uh? What are you talking about?
 I don't understand  (18-62s)

$ gst-launch-1.0 -v filesrc location=subtitles.srt ! subparse ! txt.   videotestsrc ! timeoverlay font-desc="Sans, 24" ! textoverlay name=txt shaded-background=yes font-desc="Sans, 24" ! nvvideoconvert ! nveglglessink

Thursday, October 15, 2020

DeepStream's nvdsosd

nvosd = gst_element_factory_make ("nvdsosd", "nv-onscreendisplay");

Do not draw the object bounding boxes:
g_object_set (G_OBJECT (nvosd), "display-bbox", 0, NULL);

Do not draw the object descriptions:
g_object_set (G_OBJECT (nvosd), "display-text", 0, NULL);

Show a clock in the top-left corner:
g_object_set (G_OBJECT (nvosd), "display-clock", 1, NULL);

osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
GstPadProbeCallback, (gpointer)sink, NULL);
gst_object_unref (osd_sink_pad);

static GstPadProbeReturn GstPadProbeCallback(
  GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  NvDsMetaList *l_frame = NULL, *l_obj = NULL;
  NvDsObjectMeta *obj_meta = NULL;
  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next)
  {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
    for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) 
    {
      obj_meta = (NvDsObjectMeta *) (l_obj->data);
      // NvDsObjectMeta is declared in sources/includes/nvdsmeta.h
      // class_id: Holds the index of the object class inferred by the primary detector/classifier.
      // object_id: Holds a unique ID for tracking the object.
      //   UNTRACKED_OBJECT_ID indicates that the object has not been tracked.
      // rect_params: the object's bounding box, possibly coming from the tracker
      // text_params: the object's description text: "primary-class track class1 class2..."
      // obj_label: the primary class
      for (GList *l = obj_meta->classifier_meta_list; l != NULL; l = l->next)
      {
        NvDsClassifierMeta *classifierMeta = (NvDsClassifierMeta *) (l->data);
        // NvDsClassifierMeta is declared in sources/includes/nvdsmeta.h
        // unique_component_id: the id of the classifier component
        NvDsLabelInfo *pLabel = get_possible_label(classifierMeta);
      }
    }
  }
  return GST_PAD_PROBE_OK;
}

NvDsLabelInfo *get_possible_label (NvDsClassifierMeta * classifierMeta)
{
  NvDsLabelInfo *pLabel = NULL;
  for (GList *n = classifierMeta->label_info_list; n != NULL; n = n->next) {
    NvDsLabelInfo *labelInfo = (NvDsLabelInfo *) (n->data);
    // NvDsLabelInfo is declared in sources/includes/nvdsmeta.h
    // result_class_id: the best class id
    // result_prob: its probability
    // result_label: the class name
    // pResult_label: used when result_label is too long to fit
    if (pLabel == NULL) {
        pLabel = labelInfo;
    } else if (labelInfo->result_prob > pLabel->result_prob) {
        pLabel = labelInfo;
    }
  }
  return pLabel;
}

Wednesday, October 14, 2020

deepstream-test4-app study notes

See "deepstream-test5 study notes".

Start zookeeper:
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/zookeeper-server-start.sh config/zookeeper.properties

Start kafka:
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-server-start.sh config/server.properties

Receive the trigger events:
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic ObjDetect

$ ./deepstream-test4-app -i /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 -p /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so --conn-str="localhost;9092;ObjDetect"


Tuesday, October 13, 2020

GStreamer study notes: gst-launch-1.0

Use gst-launch-1.0 to play video and audio at the same time:

$ gst-launch-1.0 filesrc location="/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4" ! qtdemux name=demux demux.audio_0 ! queue ! avdec_aac ! audioconvert ! alsasink demux.video_0 ! queue ! h264parse ! omxh264dec ! nveglglessink

Using decodebin:
$ gst-launch-1.0 filesrc location="/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4" ! decodebin name=d d. ! nvvidconv ! nveglglessink d. ! queue ! audioconvert ! audioresample ! alsasink

$ gst-launch-1.0 filesrc location="/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4" ! decodebin name=d d.src_0 ! nvvidconv ! nveglglessink d.src_1 ! queue ! audioconvert ! audioresample ! alsasink

Using uridecodebin:
nano@nano-desktop:~$ gst-launch-1.0 uridecodebin uri="file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4" name=d d. ! nvvidconv ! nveglglessink d. ! queue ! audioconvert ! audioresample ! alsasink

Play only the video:
$ gst-launch-1.0 filesrc location="/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4" ! decodebin name=d d.src_0 ! nvvidconv ! nveglglessink

Wednesday, October 7, 2020

deepstream-test5 study notes

See "kafka study notes (operations)".
See "GStreamer study notes".
See the NVIDIA DeepStream SDK Developer Guide.

Start zookeeper:
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/zookeeper-server-start.sh config/zookeeper.properties

Start kafka:
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-server-start.sh config/server.properties

$ vi configs/test5_config_file_src_infer.txt
[sourceX]
# type 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CAMERA_CSI
type=4
uri=rtsp://user:passwd@192.168.0.xxx:554/chID=1&streamType=main&linkType=tcp
smart-record=1
# starting/stopping recording can be triggered with the commands below
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic Record
# >{"command":"start-recording","sensor":{"id":"0"}}
# >{"command":"stop-recording","sensor":{"id":"0"}}
smart-record=2
# triggers start / stop once every 10 seconds
[sink1]
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
msg-broker-conn-str=localhost;9092
topic=ObjDetect
[message-consumer0]
subscribe-topic-list=Record

Useful debug-message macros, declared in
deepstream-5.0/sources/apps/apps-common/includes/deepstream_common.h
NVGSTDS_ERR_MSG_V
NVGSTDS_INFO_MSG_V
NVGSTDS_WARN_MSG_V
The debug printing from "GStreamer study notes" can also be used,
as can the debug printing from the GLib Reference Manual.

Run the command:
nano@nano-desktop:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5a$ ./deepstream-test5-app -c configs/test5_config_file_src_infer.txt 
In the terminal running the command, these keyboard commands are available:
h: Print this help
q: Quit
p: Pause
r: Resume

Allow the recording command to omit the time fields:
vi deepstream-5.0/sources/apps/apps-common/src/deepstream_c2d_msg_util.c
NvDsC2DMsg*
nvds_c2d_parse_cloud_message (gpointer data, guint size)
{
...
    if (startRec) {
      time_t startUtc, endUtc, curUtc;
      const gchar *timeStr;
      if (json_object_has_member (object, "start")) {
...
      } else {
        start = 0;
        NVGSTDS_WARN_MSG_V ("wrong message format, missing 'start' field.");
        //goto error;
      }
...
}

Receive the trigger events:
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic ObjDetect --from-beginning

Send the start-recording / stop-recording commands:
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic Record
>{"command":"start-recording","sensor":{"id":"0"}}
>{"command":"stop-recording","sensor":{"id":"0"}}

vi deepstream-5.0/sources/apps/apps-common/src/deepstream_source_bin.c
create_camera_source_bin
create_uridecode_src_bin
create_rtsp_src_bin


Monday, October 5, 2020

Send the external IP from Gmail


To request a Gmail app password, two-step verification must be enabled first.
When choosing the app type, pick Other even though Mail is listed.

#!/usr/bin/python3
import urllib.request
from ipaddress import ip_address
import smtplib

def GetExternalIP():
    try:
        ip_b = urllib.request.urlopen("https://api.ipify.org").read()
    except:
        return ''
    ip_s = ip_b.decode('utf-8')
    try:
        ip_address(ip_s)
        return ip_s
    except:
        return ''
        
def GetBackupIP():
    try:
        file = open('/tmp/ExternalIp.txt', 'r')
        ip = file.readline()
        file.close()
    except:
        return ''
    return ip

def WriteBackupIP(ip):
    try:
        file = open('/tmp/ExternalIp.txt', 'w')
        file.write(ip)
        file.close()
    except:
        return

def SendMail(ip):
    smtp = smtplib.SMTP('smtp.gmail.com', 587)
    smtp.ehlo()
    smtp.starttls()
    smtp.login('name@gmail.com', 'lkmvemxzkegzrhwqj')
    from_addr = 'name@gmail.com'
    to_addr = 'name@yahoo.com.tw'
    msg = 'Subject:External IP\n' + ip + '\n'
    status = smtp.sendmail(from_addr, to_addr, msg)
    if status == {}:
        print('SendMail success!')
    else:
        print('SendMail fail!')
    smtp.quit()

external_ip = GetExternalIP()
print('external ip {}'.format(external_ip))
backup_ip = GetBackupIP()
print('backup ip {}'.format(backup_ip))
if external_ip != '':
    if external_ip != backup_ip:
        SendMail(external_ip)
WriteBackupIP(external_ip)

GStreamer study notes

playbin automatically uses autoaudiosink and autovideosink.
The following code makes it use the sink you want:

  pipeline =
      gst_parse_launch(
      "playbin uri=file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4",
      NULL);
  GstElement *sink = gst_element_factory_make ("nveglglessink", "sink");
  g_object_set(G_OBJECT(pipeline), "video-sink", sink, NULL);


Alternatively, raise the registry rank:
    GstRegistry *plugins_register = gst_registry_get();
    GstPluginFeature *nveglglessink = gst_registry_lookup_feature(plugins_register, "nveglglessink");
    if (nveglglessink == NULL) {
      g_printerr("gst_registry_lookup_feature return NULL \n");
      return 0;
    }
    guint nveglglessink_rank = gst_plugin_feature_get_rank(nveglglessink);
    g_print("nveglglessink rank=%u\n", nveglglessink_rank);
    gst_plugin_feature_set_rank(nveglglessink, GST_RANK_PRIMARY + 11);
    gst_object_unref(nveglglessink);

Using gst-launch-1.0:
$ gst-launch-1.0 playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm video-sink=nveglglessink

Tuesday, September 29, 2020

GStreamer study notes: debugging

Macros for printing debug messages:
GST_ERROR()
GST_WARNING()
GST_INFO()
GST_DEBUG()
GST_LOG()

The output depends on the GST_DEBUG environment variable:
0: none, nothing is printed
1: ERROR
2: WARNING
4: INFO
5: DEBUG
6: LOG

Generating a pipeline graph
Set the environment variable GST_DEBUG_DUMP_DOT_DIR=file_path.
In the code, add after gst_bus_timed_pop_filtered:
GST_DEBUG_BIN_TO_DOT_FILE(GST_BIN(pipeline), GST_DEBUG_GRAPH_SHOW_MEDIA_TYPE, "file_name");
Running the program produces file_path/file_name.dot.
Run the following command to produce a PNG file:
$ dot -Tpng -o file_name.png file_name.dot
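From Python (PyGObject) the same graph can be dumped without the C macro; a small sketch, again with GST_DEBUG_DUMP_DOT_DIR set in the environment before the program starts:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch("videotestsrc num-buffers=1 ! fakesink")
pipeline.set_state(Gst.State.PLAYING)
pipeline.get_state(Gst.CLOCK_TIME_NONE)          # wait until the pipeline is up
# Writes $GST_DEBUG_DUMP_DOT_DIR/file_name.dot
Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.MEDIA_TYPE, "file_name")
pipeline.set_state(Gst.State.NULL)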

Monday, September 28, 2020

kafka study notes (architecture terms)

Reference: Kaggle的解題挑戰 - Kafka簡介 (Kafka introduction)

Kafka: a distributed message-processing system
Zookeeper: stores the cluster metadata and coordinates the brokers
Broker: receives messages from Producers, persists them, and serves Consumers' requests for data
Kafka Cluster: made up of several Brokers
Producer: the message producer; publishes messages to Kafka
Consumer: the message consumer; reads messages out of Kafka
Topic: a category of messages; can be seen as the queue between Producers and Consumers
Partition: a Topic is split into several Partitions; partitions of the same Topic can live on different machines
Retention: how long messages are kept on disk
Multiple Clusters: Kafka supports multiple clusters, mainly to improve availability and safety
Consumer Group: a group of one or more consumer instances, providing scalability and fault tolerance. Consumers in a group share a group ID, subscribe to and consume a topic together, and within a group each partition is consumed by only one consumer.
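To make the Producer/Consumer/Topic roles concrete, a minimal Python round trip (assuming the kafka-python package and the broker started as in the next section on localhost:9092):

from kafka import KafkaConsumer, KafkaProducer

# Producer: publish one message to the "test" topic.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("test", b"hello from python")
producer.flush()

# Consumer: read messages from the same topic (Ctrl-C to stop).
consumer = KafkaConsumer("test",
                         bootstrap_servers="localhost:9092",
                         auto_offset_reset="earliest")
for msg in consumer:
    print(msg.topic, msg.partition, msg.offset, msg.value)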

kafka study notes (operations)

Reference: Kaggle的解題挑戰 - Kafka實作 (Kafka hands-on)

Installation
1. Go to the APACHE KAFKA website.
2. Get the binary download.
nano@nano-desktop:~/Data/DeepStream$ tar -xzf kafka_2.13-2.6.0.tgz
nano@nano-desktop:~/Data/DeepStream$ cd kafka_2.13-2.6.0/

Start zookeeper:
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/zookeeper-server-start.sh config/zookeeper.properties

Start kafka:
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-server-start.sh config/server.properties

List the current topics:
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-topics.sh --list --zookeeper localhost:2181

Create the topic "test":
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

Receive messages from topic "test":
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

Send messages to topic "test":
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>

Friday, September 25, 2020

Understanding accuracy, precision, and recall

Predicted true/false versus actual true/false:
TP (True Positive): actually true, predicted true
FN (False Negative): actually true, predicted false
FP (False Positive): actually false, predicted true
TN (True Negative): actually false, predicted false

Accuracy = (TP+TN) / (TP+TN+FP+FN)
Accuracy: of all cases, the fraction predicted correctly.

Precision = (TP) / (TP+FP)
Precision: of the cases predicted true, how many are actually true.

Recall = (TP) / (TP+FN)
Recall: of the cases that are actually true, how many are predicted true.

High precision, low recall: most of what is caught is real, but many real cases are missed.
Low precision, high recall: most real cases are caught, but there are quite a few false alarms.
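The same definitions in a few lines of Python, with a worked example:

def metrics(tp, fn, fp, tn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# Example: 8 true positives, 2 misses, 4 false alarms, 86 true negatives.
print(metrics(8, 2, 4, 86))   # -> (0.94, 0.666..., 0.8)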

Tuesday, September 15, 2020

Playing YouTube with GStreamer

Reference: How to watch Youtube videos with Gstreamer

Install:
$ pip install youtube-dl

List the available formats:
$ youtube-dl --list-formats youtube_link

Get the URL:
$ youtube-dl --format mp4 --get-url youtube_link

Play it with GStreamer:
$ gst-launch-1.0 playbin uri="$(youtube-dl --format mp4 --get-url youtube_link)"

Using youtube-dl from a program:
    GError *error = NULL;
    GSubprocess *youtube_dl = g_subprocess_new (G_SUBPROCESS_FLAGS_STDOUT_PIPE, &error, 
        "/home/nano/.local/bin/youtube-dl", 
        //"--format", "mp4",
        "-f", "18",
        //"--get-url", "https://www.youtube.com/watch?v=gMd8bLJLsiw", 
        "--get-url", "https://www.youtube.com/watch?v=2P_74GLABBY", 
        NULL);
    if (error) {
      g_error ("Could Not Launch Sub-Proess youtube-dl");
      return -1;
    }
    GInputStream *istream = g_subprocess_get_stdout_pipe (youtube_dl);
    gchar buffer[1024];
    gssize urlSize = g_input_stream_read(istream, buffer, sizeof(buffer), NULL, &error);
    if (error || urlSize == sizeof(buffer)) {
      g_print("error g_input_stream_read\n");
      return -1;
    }
    buffer[urlSize] = 0;
    g_print("youtube_dl:uri %s\n", buffer);
    /* Set the URI to play */
    g_object_set (data.playbin, "uri", buffer, NULL);


Tuesday, September 8, 2020

GStreamer installation

Install according to the documentation:
sudo apt-get install libgstreamer1.0-0
sudo apt-get install gstreamer1.0-plugins-base
sudo apt-get install gstreamer1.0-plugins-good
sudo apt-get install gstreamer1.0-plugins-bad
sudo apt-get install gstreamer1.0-plugins-ugly
sudo apt-get install gstreamer1.0-libav
sudo apt-get install gstreamer1.0-doc
sudo apt-get install gstreamer1.0-tools
sudo apt-get install gstreamer1.0-x
sudo apt-get install gstreamer1.0-alsa
sudo apt-get install gstreamer1.0-gl
sudo apt-get install gstreamer1.0-gtk3
sudo apt-get install gstreamer1.0-qt5
sudo apt-get install gstreamer1.0-pulseaudio

These additional packages are also needed:
sudo apt-get install libgstreamer1.0-dev
sudo apt-get install libgstreamer-plugins-base1.0-dev
sudo apt-get install libgtk2.0-dev
sudo apt-get install clutter-gst-3.0
sudo apt-get install libclutter-gst-3.0-0
sudo apt-get install libclutter-gst-3.0-dev

Download the tutorial examples:
git clone https://gitlab.freedesktop.org/gstreamer/gst-docs

In the Makefile:
PKGS := gstreamer-1.0 gtk+-2.0 clutter-gst-3.0
CFLAGS += `pkg-config --cflags $(PKGS)`
LIBS += `pkg-config --libs $(PKGS)`

$(MAKECMDGOALS).o: $(MAKECMDGOALS).c Makefile
$(CC) -c -o $@ $(CFLAGS) $<

$(MAKECMDGOALS): $(MAKECMDGOALS).o Makefile
$(CC) -o $@ $@.o $(LIBS)

Wednesday, September 2, 2020

Speeding up conky on Jetson Nano

When conky uses exec, execp and similar commands, it consumes a lot of resources.
Previously execi was used to run tegrastats to collect some system information.
I rewrote this directly in C, which provides variables such as jetson_fan,
jetson_temp_ao, jetson_temp_cpu, jetson_temp_gpu,
jetson_power_cpu, jetson_power_gpu, and jetson_power_all.
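For comparison, the temperatures behind those variables can also be read straight from sysfs; a quick Python check (the thermal-zone layout is the usual Linux one, and the zone names differ between JetPack releases):

from glob import glob

# Print every thermal zone's name and temperature (sysfs reports millidegrees C).
for zone in sorted(glob("/sys/class/thermal/thermal_zone*")):
    with open(zone + "/type") as f:
        name = f.read().strip()
    with open(zone + "/temp") as f:
        temp_c = int(f.read().strip()) / 1000.0
    print("%s: %.1f C" % (name, temp_c))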
$ vi .conkyrc
RAM:${jetson_ram_usage}${alignr}${jetson_ram}
RAM lfb:${alignr}${jetson_ram_lfb}
SWAP:${jetson_swap_usage}${alignr}${jetson_swap}
SWAP Cached:${alignr}${jetson_swap_cached}
EMC Bandwidth:${jetson_emc} ${jetson_emc_bar}
CPU0:${jetson_cpu_frq 0} ${jetson_cpu_bar 0}
CPU1:${jetson_cpu_frq 1} ${jetson_cpu_bar 1}
CPU2:${jetson_cpu_frq 2} ${jetson_cpu_bar 2}
CPU3:${jetson_cpu_frq 3} ${jetson_cpu_bar 3}
GPU  ${jetson_gpu} ${jetson_gpu_bar}
${font :blod:size=12}Temperature ${hr}
${font :normal:size=10}Thermal:${alignr} ${jetson_temp_thermal}
PLL:${alignr} ${jetson_temp_pll}
CPU:${alignr} ${jetson_temp_cpu}
Wifi:${alignr} ${jetson_temp_wifi}
PMIC:${alignr} ${jetson_temp_pmic}
GPU:${alignr} ${jetson_temp_gpu}
AO:${alignr} ${jetson_temp_ao}
Fan Speed:${jetson_fan} ${jetson_fan_bar}




Below is the patch to the conky source.

Tuesday, September 1, 2020

conky on Jetson Nano


Install the dependencies:
$ sudo apt-get install cmake
$ sudo apt-get install libimlib2-dev
$ sudo apt-get install libncurses5-dev
$ sudo apt-get install libx11-dev
$ sudo apt-get install libxdamage-dev
$ sudo apt-get install libxft-dev
$ sudo apt-get install libxinerama-dev
$ sudo apt-get install libxml2-dev
$ sudo apt-get install libxext-dev
$ sudo apt-get install libcurl4-openssl-dev
$ sudo apt-get install liblua5.3-dev

Make sure the cmake version is >= 3.8:
$ cmake --version

Monday, August 31, 2020

Showing host information on the desktop

Reference: Ubuntu 20.04 System Monitoring with Conky widgets; it also works on Ubuntu 18.04.
$ sudo apt-get install conky-all
Start menu / Startup Applications
Add /usr/bin/conky
$ sudo reboot
$ cp /etc/conky/conky.conf ~/.conkyrc
$ vi ~/.conkyrc
conky.config = {
update_interval = 1,
cpu_avg_samples = 2,
net_avg_samples = 2,
out_to_console = false,
override_utf8_locale = true,
double_buffer = true,
no_buffers = true,
text_buffer_size = 32768,
imlib_cache_size = 0,
own_window = true,
own_window_type = 'normal',
own_window_argb_visual = true,
own_window_argb_value = 80,
own_window_hints = 'undecorated,below,sticky,skip_taskbar,skip_pager',
border_inner_margin = 5,
border_outer_margin = 0,
xinerama_head = 1,
alignment = 'bottom_right',
gap_x = 0,
gap_y = 33,
draw_shades = false,
draw_outline = false,
draw_borders = false,
draw_graph_borders = false,
use_xft = true,
font = 'Ubuntu Mono:size=12',
xftalpha = 0.8,
uppercase = false,
default_color = 'white',
own_window_colour = '#000000',
minimum_width = 300, minimum_height = 0,
alignment = 'top_right',

};
conky.text = [[
${font sans-serif:bold:size=16}${time %Y-%m-%d}${alignr}${time %H:%M:%S}${font}
${font sans-serif:bold:size=10}SYSTEM ${hr 2}
${font sans-serif:normal:size=8}$sysname $kernel $alignr $machine
Host:$alignr$nodename
Uptime:$alignr$uptime
File System: $alignr${fs_type}
Processes: $alignr ${execi 1000 ps aux | wc -l}

${font sans-serif:bold:size=10}CPU ${hr 2}
${font sans-serif:normal:size=8}${execi 1000 grep model /proc/cpuinfo | cut -d : -f2 | tail -1 | sed 's/\s//'}
${font sans-serif:normal:size=8}CPU:${cpugraph cpu0 50}
CPU: ${cpu cpu0}% ${cpubar cpu0}

${font sans-serif:bold:size=10}Nvidia GPU ${hr 2}
${font sans-serif:normal:size=8}${execpi 1000 (nvidia-smi --query-gpu=gpu_name --format=csv,noheader)}
Temperature:${alignr}${execpi 3 (nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader)}°C
Fan Speed:${alignr}${execpi 3 (nvidia-smi --query-gpu=fan.speed --format=csv,noheader)}
Utilization:${alignr}${execpi 3 (nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader)}
Power:${alignr}${execpi 3 (nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits)}/${execpi 10 (nvidia-smi --query-gpu=power.default_limit --format=csv,noheader)}
Memory:${alignr}${execpi 3 (nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits)}/${execpi 10 (nvidia-smi --query-gpu=memory.total --format=csv,noheader)}
${font Courier-New:normal:size=8}PID${alignc}Name${alignr}Memory
${execpi 3 (nvidia-smi -q|tr -d " "|awk -F ":" '/ProcessID/{printf $2"\t";getline;getline;printf "%-28s",substr($2, 1, 28)"\t";getline;printf $2"\n"}')}

${font sans-serif:bold:size=10}MEMORY ${hr 2}
${font sans-serif:normal:size=8}RAM $alignc $mem / $memmax $alignr $memperc%
$membar
SWAP $alignc ${swap} / ${swapmax} $alignr ${swapperc}%
${swapbar}

${font sans-serif:bold:size=10}DISK USAGE ${hr 2}
${font sans-serif:normal:size=8}/ $alignc ${fs_used /} / ${fs_size /} $alignr ${fs_used_perc /}%
${fs_bar /}

${font Ubuntu:bold:size=10}NETWORK ${hr 2}
${font sans-serif:normal:size=8}Local IPs:${alignr}External IP:
${execi 1000 ip a | grep inet | grep -vw lo | grep -v inet6 | cut -d \/ -f1 | sed 's/[^0-9\.]*//g'}  ${alignr}${execi 1000  wget -q -O- http://ipecho.net/plain; echo}
${font sans-serif:normal:size=8}Down: ${downspeed eno1}  ${alignr}Up: ${upspeed eno1} 
${color lightgray}${downspeedgraph eno1 80,130 } ${alignr}${upspeedgraph eno1 80,130 }$color
${font sans-serif:bold:size=10}TOP PROCESSES ${hr 2}
${font sans-serif:normal:size=8}Name $alignr PID   CPU%   MEM%${font sans-serif:normal:size=8}
${top name 1} $alignr ${top pid 1} ${top cpu 1}% ${top mem 1}%
${top name 2} $alignr ${top pid 2} ${top cpu 2}% ${top mem 2}%
${top name 3} $alignr ${top pid 3} ${top cpu 3}% ${top mem 3}%
${top name 4} $alignr ${top pid 4} ${top cpu 4}% ${top mem 4}%
${top name 5} $alignr ${top pid 5} ${top cpu 5}% ${top mem 5}%
${top name 6} $alignr ${top pid 6} ${top cpu 6}% ${top mem 6}%
${top name 7} $alignr ${top pid 7} ${top cpu 7}% ${top mem 7}%
${top name 8} $alignr ${top pid 8} ${top cpu 8}% ${top mem 8}%
${top name 9} $alignr ${top pid 9} ${top cpu 9}% ${top mem 9}%
${top name 10} $alignr ${top pid 10} ${top cpu 10}% ${top mem 10}%
]];


The most time-consuming part is
nvidia-smi -q|tr -d " "|awk -F ":" '/ProcessID/{printf $2"\t";getline;getline;printf "%-28s",substr($2, 1, 28)"\t";getline;printf $2"\n"}'
which adds the Nvidia GPU process information.

Thursday, August 27, 2020

Working through "How to configure your NVIDIA Jetson Nano for Computer Vision and Deep Learning"

Reference: How to configure your NVIDIA Jetson Nano for Computer Vision and Deep Learning

Update the system-level packages:
$ sudo apt-get update
$ sudo apt-get upgrade

Install the related system-level packages:
$ sudo apt-get install git
$ sudo apt-get install cmake
$ sudo apt-get install libatlas-base-dev
$ sudo apt-get install gfortran
$ sudo apt-get install libhdf5-serial-dev
$ sudo apt-get install hdf5-tools
$ sudo apt-get install python3-dev
$ sudo apt-get install locate
$ sudo apt-get install libfreetype6-dev
$ sudo apt-get install python3-setuptools
$ sudo apt-get install protobuf-compiler
$ sudo apt-get install libprotobuf-dev
$ sudo apt-get install openssl
$ sudo apt-get install libssl-dev
$ sudo apt-get install libcurl4-openssl-dev
$ sudo apt-get install cython3
$ sudo apt-get install libxml2-dev
$ sudo apt-get install libxslt1-dev

Tuesday, August 25, 2020

Jetson Nano darknet

Reference: Nvidia Jetson Nano 使用心得 (usage notes)

$ git clone https://github.com/pjreddie/darknet.git
The build failed with the following error:
error: 'CUDNN_CONVOLUTION_FWD_SPECIFY_WORKSPACE_LIMIT' undeclared

Switched to:
$ git clone https://github.com/AlexeyAB/darknet
$ cd darknet
$ vi Makefile
GPU=1
CUDNN=1
CUDNN_HALF=1
OPENCV=1
LIBSO=1

$ make -j4

Reference: AlexeyAB/darknet
Download the cfg and weights files from the YOLOv4 model zoo:
yolov4-tiny.weights (recommended)
yolov4-leaky-416.weight
yolov4-mish-416.weight


nano@nano-desktop:~/Data/darknet/darknet.AlexeyAB$ ./darknet detector demo ../cfg/coco.data ../cfg/yolov4-leaky-416.cfg ../weights/yolov4-leaky-416.weights 'nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720,format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink'

nano@nano-desktop:~/Data/darknet/darknet.AlexeyAB$ python3 darknet_video.py --weights ../weights/yolov4-leaky-416.weights --data_file ../cfg/coco.data --config_file ../cfg/yolov4-leaky-416.cfg --input 'nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)416, height=(int)416,format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink' --ext_output

You have to press Enter to advance to the next frame.



Jetson Nano remote desktop

I originally used x11vnc;
see "Nvidia Jetson AGX Xavier 設定 VNC" (VNC setup).

It kept disconnecting, which was too inconvenient.

$ sudo vi /usr/share/glib-2.0/schemas/org.gnome.Vino.gschema.xml
After <schema>, add:
    <key name='enabled' type='b'>
      <summary>Enable remote access to the desktop</summary>
      <description>
        If true, allows remote access to the desktop via the RFB
        protocol. Users on remote machines may then connect to the
        desktop using a VNC viewer.
      </description>
      <default>false</default>
    </key>

Compile the Gnome schemas:
$ sudo glib-compile-schemas /usr/share/glib-2.0/schemas
Desktop / System Settings / Desktop Sharing
Tick Allow other users to view your desktop
Tick Allow other users to control your desktop
Desktop / Search / Startup Applications
Click Add
Name: Vino
Command: /usr/lib/vino/vino-server
Comment: VNC Server
$ gsettings set org.gnome.Vino require-encryption false
$ gsettings set org.gnome.Vino prompt-enabled false
$ sudo reboot

Change the desktop resolution:
$ sudo xrandr --fb 1280x960


Build OpenCV on Jetson Nano with CUDA and cuDNN

Reference: How to configure your NVIDIA Jetson Nano for Computer Vision and Deep Learning

make failed with the following errors:
error: 'CUDNN_CONVOLUTION_FWD_PREFER_FASTEST' was not declared in this scope
error: 'CUDNN_CONVOLUTION_BWD_PREFER_FASTEST' was not declared in this scope
The cause: OpenCV releases before 4.4 do not support cuDNN 8.0.

With cuDNN 8.0, only OpenCV 4.4 builds successfully:
$ git clone https://github.com/opencv/opencv.git -b 4.4.0 --depth 1
$ git clone https://github.com/opencv/opencv_contrib.git -b 4.4.0 --depth 1


Remove the old installation first:
$ sudo apt purge libopencv-dev libopencv-python libopencv-samples libopencv*
$ sudo apt update

Reference: Any OCR model run in Jetson Nano
Reference: Installing tesseract 4.0 on Ubuntu 16.04
$ sudo apt install tesseract-ocr libtesseract-dev


$ sudo add-apt-repository ppa:alex-p/tesseract-ocr
$ sudo apt-get update
$ sudo apt-get install libleptonica-dev
$ sudo apt-get install qt5-default
$ sudo apt-get install qtcreator

cmake \
 -D CMAKE_BUILD_TYPE=RELEASE \
 -D CMAKE_INSTALL_PREFIX=/usr/local \
 -D WITH_CUDA=ON \
 -D WITH_CUDNN=ON \
 -D WITH_CUBLAS=ON \
 -D WITH_V4L=ON \
 -D CUDNN_VERSION='8.0' \
 -D CUDNN_INCLUDE_DIR='/usr/include' \
 -D OPENCV_DNN_CUDA=ON \
 -D WITH_GTK=ON \
 -D WITH_QT=ON \
 -D INSTALL_C_EXAMPLES=OFF \
 -D WITH_GSTREAMER=ON \
 -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.4.0/modules/ \
 -D OPENCV_ENABLE_NONFREE=ON \
 -D BUILD_TESTS=OFF \
 -D BUILD_PERF_TESTS=OFF \
 -D BUILD_EXAMPLES=OFF \
 -D INSTALL_PYTHON_EXAMPLES=ON \
 -D PYTHON_EXECUTABLE=$(which python3) \
 -D BUILD_opencv_python2=OFF \
 -D BUILD_opencv_python3=ON \
 -D BUILD_opencv_java=OFF \
 -D PYTHON3_EXECUTABLE=$(which python3) \
 -D PYTHON3_INCLUDE_DIR=$(python3 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
 -D PYTHON3_PACKAGES_PATH=$(python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
 -D Tesseract_INCLUDE_DIR=/usr/include/tesseract \
 -D Tesseract_LIBRARY=/usr/lib/aarch64-linux-gnu/libtesseract.so \
 -D Lept_LIBRARY=/usr/lib/aarch64-linux-gnu/liblept.so \
 -D CUDA_ARCH_PTX="" \
 -D CUDA_ARCH_BIN="5.3,6.2,7.2" \
 -D OPENCV_GENERATE_PKGCONFIG=YES ..

$ make -j4
$ sudo make install
$ sudo ldconfig


$ opencv_version -v
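A quick Python check that the new build really has CUDA and cuDNN enabled (assuming the cv2 module from this build is the one on the import path):

import cv2

print(cv2.__version__)
print("CUDA devices:", cv2.cuda.getCudaEnabledDeviceCount())
for line in cv2.getBuildInformation().splitlines():
    if "CUDA" in line or "cuDNN" in line:
        print(line)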

Install it into the Python virtualenv:
nano@nano-desktop:~/envs/py3cv4/lib/python3.6/site-packages$ ln -s /usr/lib/python3/dist-packages/cv2/python-3.6/cv2.cpython-36m-aarch64-linux-gnu.so cv2.so


In mainwindow.cpp add:
#include <opencv2/opencv.hpp>
#include <opencv2/cudaimgproc.hpp>


SSD test
(py3cv4) nano@nano-desktop:~/Data/opencv/opencv-dnn-gpu-examples$ python opencv-ssd-cuda/ssd_object_detection.py --prototxt opencv-ssd-cuda/MobileNetSSD_deploy.prototxt --model opencv-ssd-cuda/MobileNetSSD_deploy.caffemodel --input example_videos/guitar.mp4 --display 0 --use-gpu=1
[INFO] setting preferable backend and target to CUDA...
[INFO] accessing video stream...
[INFO] elasped time: 34.94
[INFO] approx. FPS: 7.07
(py3cv4) nano@nano-desktop:~/Data/opencv/opencv-dnn-gpu-examples$ python opencv-ssd-cuda/ssd_object_detection.py --prototxt opencv-ssd-cuda/MobileNetSSD_deploy.prototxt --model opencv-ssd-cuda/MobileNetSSD_deploy.caffemodel --input example_videos/guitar.mp4 --display 0
[INFO] accessing video stream...
[INFO] elasped time: 81.08
[INFO] approx. FPS: 3.05

YOLO test
(py3cv4) nano@nano-desktop:~/Data/opencv/opencv-dnn-gpu-examples$ python3 opencv-yolo-cuda/yolo_object_detection.py -y opencv-yolo-cuda/yolo-coco -i example_videos/guitar.mp4 --display=0 --use-gpu=0
[INFO] loading YOLO from disk...
[INFO] setting preferable backend and target to CUDA...
[INFO] accessing video stream...
[INFO] elasped time: 212.71
[INFO] approx. FPS: 1.16
(py3cv4) nano@nano-desktop:~/Data/opencv/opencv-dnn-gpu-examples$ python3 opencv-yolo-cuda/yolo_object_detection.py -y opencv-yolo-cuda/yolo-coco -i example_videos/guitar.mp4 --display=0
[INFO] loading YOLO from disk...
[INFO] accessing video stream...
[INFO] elasped time: 1275.95
[INFO] approx. FPS: 0.19

Mask R-CNN test
(py3cv4) nano@nano-desktop:~/Data/opencv/opencv-dnn-gpu-examples$ python3 opencv-mask-rcnn-cuda/mask_rcnn_segmentation.py --mask-rcnn opencv-mask-rcnn-cuda/mask-rcnn-coco/ --input example_videos/guitar.mp4 --display=0 --use-gpu=1
[INFO] loading Mask R-CNN from disk...
[INFO] setting preferable backend and target to CUDA...
[INFO] accessing video stream...
[INFO] elasped time: 445.18
[INFO] approx. FPS: 0.55
(py3cv4) nano@nano-desktop:~/Data/opencv/opencv-dnn-gpu-examples$ python3 opencv-mask-rcnn-cuda/mask_rcnn_segmentation.py --mask-rcnn opencv-mask-rcnn-cuda/mask-rcnn-coco/ --input example_videos/guitar.mp4 --display=0
[INFO] loading Mask R-CNN from disk...
[INFO] accessing video stream...
[INFO] elasped time: 4512.29
[INFO] approx. FPS: 0.05