
Wednesday, October 28, 2020

Using YOLOv4 on DeepStream

Reference: Using YOLOv4 on NVIDIA DeepStream 5.0
Download yolov4.cfg, yolov4.weights, yolov4-tiny.cfg, yolov4-tiny.weights from darknet
The width and height in the cfg file can be changed if needed

$ pip3 install torch
$ pip3 install torchvision
$ git clone https://github.com/Tianxiaomo/pytorch-YOLOv4.git
$ cd pytorch-YOLOv4
$ pip3 install onnxruntime
$ python3 demo_darknet2onnx.py yolov4.cfg yolov4.weights ./data/giraffe.jpg 1
This produces yolov4_1_3_416_416_static.onnx
/usr/src/tensorrt/bin/trtexec --onnx=yolov4_1_3_416_416_static.onnx \
--explicitBatch --saveEngine=yolov4_1_3_416_416_fp16.engine \
--workspace=4096 --fp16
This produces yolov4_1_3_416_416_fp16.engine
$ cd ..
$ git clone https://github.com/NVIDIA-AI-IOT/yolov4_deepstream
$ cd yolov4_deepstream
$ sudo cp -r deepstream_yolov4 /opt/nvidia/deepstream/deepstream-5.0/sources
$ cd /opt/nvidia/deepstream/deepstream-5.0/sources
$ sudo chown user.group deepstream_yolov4
$ cd deepstream_yolov4/nvdsinfer_custom_impl_Yolo/
$ make
$ cd ..
Copy yolov4_1_3_416_416_fp16.engine here
Edit config_infer_primary_yoloV4.txt
model-engine-file=yolov4_1_3_416_416_fp16.engine

Edit deepstream_app_config_yoloV4.txt
# do not write the output to a file
[sink0]
enable=0
# display on screen
[sink1]
enable=1
type=2
sync=0
display-id=0
offset-x=0
offset-y=0
width=0
height=0
overlay-id=1
source-id=0
# do not set model-engine-file here
[primary-gie]
enable=1
#model-engine-file=yolov4_1_3_320_320_fp16.engine
# enable the tracker
[tracker]
enable=1

Run
$ deepstream-app -c deepstream_app_config_yoloV4.txt 




Installing TensorRT

Reference: NVIDIA TensorRT Documentation

Go to https://developer.nvidia.com/tensorrt and click Download
Log in with your account and password
Select TensorRT 7.2.1
Select TensorRT 7.2.1 for Linux and CUDA 10.2
Select TensorRT 7.2.1 for Ubuntu 18.04 and CUDA 10.2 DEB local repo package

$ os="ubuntu1804"
$ tag="cuda10.2-trt7.2.1.6-ga-20201006"
$ sudo dpkg -i nv-tensorrt-repo-${os}-${tag}_1-1_amd64.deb
$ sudo apt-key add /var/nv-tensorrt-repo-${tag}/7fa2af80.pub
$ sudo apt-get update
$ sudo apt-get install tensorrt
$ sudo apt-get install python-libnvinfer-dev
$ sudo apt-get install python3-libnvinfer-dev
$ sudo apt-get install uff-converter-tf
$ sudo apt-get install onnx-graphsurgeon
Check whether the installation succeeded
$ dpkg -l | grep TensorRT


Using ONNX models with DeepStream

Reference: How to deploy ONNX models on NVIDIA Jetson Nano using DeepStream
From the ONNX model zoo, find the Tiny YOLOv2 model and download tinyyolov2-8.onnx (Opset version 8)

Tiny YOLOv2 overview
  Input Shape (1x3x416x416)
  Output Shape (1x125x13x13)
    The 125 channels are 5 bounding boxes, each with 25 values (32-bit floats)
    The first 5 values describe the bounding box (x, y, w, h, objectness); the remaining 20 are class probabilities
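As a worked example of indexing into that buffer, here is my own sketch, assuming the usual contiguous CHW layout where the 25 values of each box are consecutive channels:

// Hypothetical helper: offset of value f (0..24) of box b (0..4) at grid cell (row, col)
// in a float buffer of shape 1x125x13x13 stored channel-major (CHW).
static inline int tiny_yolov2_offset(int b, int f, int row, int col)
{
    const int grid = 13;          // 13 x 13 grid cells
    const int vals_per_box = 25;  // x, y, w, h, objectness + 20 class scores
    int channel = b * vals_per_box + f;
    return (channel * grid + row) * grid + col;
}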

$ git clone https://github.com/thatbrguy/Deep-Stream-ONNX.git
$ cd Deep-Stream-ONNX
$ mv ~/Downloads/tinyyolov2-8.onnx .
Download sample.tar.gz and extract it

$ vi config/config_infer_custom_yolo.txt
onnx-file=../tinyyolov2-8.onnx
parse-bbox-func-name=NvDsInferParseCustomYoloV2Tiny
custom-lib-path=../custom_bbox_parser/libnvdsinfer_custom_bbox_tiny_yolo.so

$ vi custom_bbox_parser/Makefile
SRCFILES:=nvdsparsebbox_tiny_yolo.cpp
TARGET_LIB:=libnvdsinfer_custom_bbox_tiny_yolo.so
DEEPSTREAM_PATH:=/opt/nvidia/deepstream/deepstream-5.0

$ vi custom_bbox_parser/nvdsparsebbox_tiny_yolo.cpp
    // changed from: assert (layer.dims.numDims == 3);
    assert (layer.inferDims.numDims == 3);

extern "C" bool NvDsInferParseCustomYoloV2Tiny(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList);
outputLayersInfo: the output layers of the ONNX model; the raw data is in outputLayersInfo[0].buffer
networkInfo: information about the ONNX model (input width, height, channels)
detectionParams: configuration parameters, e.g. numClassesConfigured
objectList: the list of detections we have to fill in
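A minimal sketch of how these pieces fit together (this is not the full decode logic from the repo; the values pushed below are placeholders for whatever the real decode produces):

#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomYoloV2Tiny(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList)
{
    // Raw 1x125x13x13 output of the ONNX model
    const float *data = static_cast<const float *>(outputLayersInfo[0].buffer);

    // ... decode the 5 boxes of every 13x13 cell from `data` here,
    // scale them to pixel coordinates with networkInfo.width / networkInfo.height,
    // and keep only boxes above a confidence threshold ...

    NvDsInferParseObjectInfo obj;    // one decoded detection (illustrative values only)
    obj.classId = 0;
    obj.detectionConfidence = 0.9f;
    obj.left = 0; obj.top = 0; obj.width = 10; obj.height = 10;
    objectList.push_back(obj);

    return true;    // return true to indicate the output was parsed successfully
}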

Test
$ deepstream-app -c config/deepstream_app_custom_yolo.txt

Monday, October 26, 2020

RTSP service on Jetson Nano

Download test-launch.c v1.18
Install the required packages
$ sudo apt-get install libgstrtspserver-1.0-dev libgstreamer1.0-dev
Compile
$ gcc test-launch.c -o test-launch $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)
If compilation fails with
undefined reference to `gst_rtsp_media_factory_set_enable_rtcp'
just remove the code that references it, as shown in the sketch below
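A minimal sketch of the edit (my assumption of how the v1.18 source calls this API; the exact line may differ in your copy):

/* The libgstrtspserver shipped with Ubuntu 18.04 predates this API, so comment
 * the call out (or delete it together with the related disable_rtcp option): */
// gst_rtsp_media_factory_set_enable_rtcp (factory, !disable_rtcp);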

Start the service as a test
$ ./test-launch "videotestsrc ! omxh265enc ! rtph265pay name=pay0 pt=96"
View the stream (on a Jetson platform)
$ gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! application/x-rtp, media=video ! queue ! decodebin ! nvvidconv ! nveglglessink
View the stream (on a general x86 platform)
$ gst-launch-1.0 rtspsrc location=rtsp://192.168.0.105:8554/test ! application/x-rtp, media=video ! queue ! decodebin ! nvvideoconvert ! nveglglessink
Using a CSI camera
$ ./test-launch "nvarguscamerasrc ! \
video/x-raw(memory:NVMM), format=NV12, width=640, height=480, framerate=30/1 ! \
nvvidconv ! video/x-raw ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"

Friday, October 23, 2020

Installing DeepStream

Install the required packages
$ sudo apt install \
    libssl1.0.0 \
    libgstreamer1.0-0 \
    gstreamer1.0-tools \
    gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-ugly \
    gstreamer1.0-libav \
    libgstrtspserver-1.0-0 \
    libjansson4

Install the NVIDIA driver; see "nvidia driver install on ubuntu" below
Install CUDA Toolkit 10.2
Go to https://developer.nvidia.com/cuda-downloads
Under Resources, find "Archive of Previous CUDA Releases"
Select CUDA Toolkit 10.2
Select Linux/x86_64/Ubuntu/18.04/deb(local)
Install following the shown commands; if another CUDA version is already installed, remove it first
$ sudo apt-get remove cuda
$ sudo apt-cache madison cuda
$ sudo apt-get install cuda=10.2.89-1

Install TensorRT 7.0: https://developer.nvidia.com/nvidia-tensorrt-download
Install librdkafka
$ git clone https://github.com/edenhill/librdkafka.git
$ cd librdkafka
$ git reset --hard 7101c2310341ab3f4675fc565f64f0967e135a6a
$ ./configure
$ make
$ sudo make install
$ sudo mkdir -p /opt/nvidia/deepstream/deepstream-5.0/lib 
$ sudo cp /usr/local/lib/librdkafka* /opt/nvidia/deepstream/deepstream-5.0/lib

Download the DeepStream SDK
https://developer.nvidia.com/deepstream-getting-started
$ sudo apt-get install ./deepstream-5.0_5.0.0-1_amd64.deb
$ sudo tar -xvf deepstream_sdk_v5.0.0_x86_64.tbz2 -C /
$ cd /opt/nvidia/deepstream/deepstream-5.0/
$ sudo ./install.sh
$ sudo ldconfig

If you get a "no such element factory" error and the Gst elements cannot be created
$ rm $HOME/.cache/gstreamer-1.0/registry.x86_64.bin



nvidia driver install on ubuntu

List the available drivers and install the recommended one automatically
$ ubuntu-drivers devices
$ sudo ubuntu-drivers autoinstall
The install may fail; check the error for missing dependencies (Depends) and install them manually
$ sudo apt install <the missing dependency>
If more dependencies are reported, keep installing them, then finally run
$ sudo ubuntu-drivers autoinstall
Reboot
$ sudo reboot

Check the driver version
$ nvidia-smi

Thursday, October 22, 2020

Face mask detection

Reference: face_mask_detection
Reference: Nvidia TLT (Transfer Learning Toolkit), for installing the required tools

Reference datasets
  with mask
  without mask
After downloading, arrange them according to the required directory layout

$ docker run --name face-mask --runtime=nvidia --gpus all -it -v /home/user/Data/DeepStreamer/tlt-experiments/:/workspace/tlt-experiments -p 8888:8888 nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3 /bin/bash
root@0aa10155a3bd:/workspace/tlt-experiments# exit
exit
$ docker ps
CONTAINER ID        IMAGE                                         COMMAND                  CREATED             STATUS              PORTS                    NAMES
0aa10155a3bd        nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3   "install_ngc_cli.sh …"   8 minutes ago       Up 6 minutes        0.0.0.0:8888->8888/tcp   face-mask
$ docker exec -it face-mask /bin/bash
# cd tlt-experiments/face-mask-detection
# python -m pip install -r requirements.txt
Edit face-mask-detection/tlt_specs/detectnet_v2_tfrecords_kitti_trainval.txt
root_directory_path: "/workspace/tlt-experiments/face-mask-detection-data/kitti/train"
Edit face-mask-detection/tlt_specs/detectnet_v2_train_resnet18_kitti.txt
tfrecords_path: "/workspace/tlt-experiments/face-mask-detection-data/kitti/tfrecords/kitti_trainval/*"
image_directory_path: "/workspace/tlt-experiments/face-mask-detection-data/kitti/train"
pretrained_model_file: "/workspace/tlt-experiments/detectnet_v2/pretrained_resnet18/tlt_pretrained_detectnet_v2_vresnet18/resnet18.hdf5"
Edit face-mask-detection/tlt_specs/detectnet_v2_retrain_resnet18_kitti.txt
tfrecords_path: "/workspace/tlt-experiments/face-mask-detection-data/kitti/tfrecords/kitti_trainval/*"
image_directory_path: "/workspace/tlt-experiments/face-mask-detection-data/kitti/train"
pretrained_model_file: "/workspace/tlt-experiments/face-mask-detection-data/detectnet_v2/experiment_dir_pruned/resnet18_nopool_bn_detectnet_v2_pruned.tlt"
Edit face-mask-detection/tlt_specs/detectnet_v2_inference_kitti_tlt.txt
model: "/workspace/tlt-experiments/detectnet_v2/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt"
Edit face-mask-detection/tlt_specs/detectnet_v2_inference_kitti_etlt.txt
trt_engine: "/workspace/tlt-experiments/detectnet_v2/experiment_dir_final_unpruned/resnet18_detector.trt"

Generate KITTI-format annotations
# vi data2kitti.sh
/usr/bin/python data2kitti.py --kaggle-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/Kaggle Medical Mask Dataset/medical-masks-dataset' --mafa-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/MAFA Dataset' --fddb-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/FDDB Dataset' --widerface-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/Wider Face Dataset' --kitti-base-path '/workspace/tlt-experiments/face-mask-detection-data/kitti' --train
/usr/bin/python data2kitti.py --kaggle-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/Kaggle Medical Mask Dataset/medical-masks-dataset' --mafa-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/MAFA Dataset' --fddb-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/FDDB Dataset' --widerface-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/Wider Face Dataset' --kitti-base-path '/workspace/tlt-experiments/face-mask-detection-data/kitti' --val
# ./data2kitti.sh
# jupyter notebook --ip 0.0.0.0 --allow-root
    To access the notebook, open this file in a browser:
        file:///root/.local/share/jupyter/runtime/nbserver-241-open.html
    Or copy and paste one of these URLs:
        http://0aa10155a3bd:8888/?token=29cbb63890972635e00320fe6c66bda30b2c81daa7400f3d
     or http://127.0.0.1:8888/?token=29cbb63890972635e00320fe6c66bda30b2c81daa7400f3d
Open http://127.0.0.1:8888/?token=29cbb63890972635e00320fe6c66bda30b2c81daa7400f3d in a browser
Go to http://192.168.0.101:8888/notebooks/face-mask-detection.ipynb
Point the following environment variables at the correct directories
# Setting up env variables for cleaner command line commands.
%env KEY=tlt_encode
# User directory
%env USER_EXPERIMENT_DIR=/workspace/tlt-experiments/detectnet_v2
# Download directory
%env DATA_DOWNLOAD_DIR=/workspace/tlt-experiments/face-mask-detection-data/kitti
%env SPECS_DIR=/workspace/tlt-experiments/face-mask-detection/tlt_specs

Generate the tfrecords
!tlt-dataset-convert -d $SPECS_DIR/detectnet_v2_tfrecords_kitti_trainval.txt \
                     -o $DATA_DOWNLOAD_DIR/tfrecords/kitti_trainval/kitti_trainval

List tlt_pretrained_detectnet_v2
!ngc registry model list nvidia/tlt_pretrained_detectnet_v2:*
!mkdir -p $USER_EXPERIMENT_DIR/pretrained_resnet18/
Download tlt_pretrained_detectnet_v2
!ngc registry model download-version nvidia/tlt_pretrained_detectnet_v2:resnet18 \
    --dest $USER_EXPERIMENT_DIR/pretrained_resnet18

Start training
!tlt-train detectnet_v2 -e $SPECS_DIR/detectnet_v2_train_resnet18_kitti.txt \
                        -r $USER_EXPERIMENT_DIR/experiment_dir_unpruned \
                        -k $KEY \
                        -n resnet18_detector

Evaluate the trained model
!tlt-evaluate detectnet_v2 -e $SPECS_DIR/detectnet_v2_train_resnet18_kitti.txt\
                           -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/resnet18_detector.tlt \
                           -k $KEY

Prune the model; -pth is the threshold: the larger it is, the smaller and faster the pruned model, but accuracy drops
!mkdir -p $USER_EXPERIMENT_DIR/experiment_dir_pruned
!tlt-prune -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/resnet18_detector.tlt \
           -o $USER_EXPERIMENT_DIR/experiment_dir_pruned/resnet18_nopool_bn_detectnet_v2_pruned.tlt \
           -eq union \
           -pth 0.7 \
           -k $KEY

Retrain the pruned model
!tlt-train detectnet_v2 -e $SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt \
                        -r $USER_EXPERIMENT_DIR/experiment_dir_retrain \
                        -k $KEY \
                        -n resnet18_detector_pruned

Evaluate the retrained model
!tlt-evaluate detectnet_v2 -e $SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt \
                           -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt \
                           -k $KEY

Run inference on the test images with the model
!tlt-infer detectnet_v2 -e $SPECS_DIR/detectnet_v2_inference_kitti_tlt.txt \
                        -o $USER_EXPERIMENT_DIR/tlt_infer_testing \
                        -i $DATA_DOWNLOAD_DIR/test/images \
                        -k $KEY

Export a model that DeepStream can use
!mkdir -p $USER_EXPERIMENT_DIR/experiment_dir_final
# Removing a pre-existing copy of the etlt if there has been any.
import os
output_file=os.path.join(os.environ['USER_EXPERIMENT_DIR'],
                         "experiment_dir_final/resnet18_detector_thermal.etlt")
if os.path.exists(output_file):
    os.system("rm {}".format(output_file))
!tlt-export detectnet_v2 \
            -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt \
            -o $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector_thermal.etlt \
            -k $KEY

Edit ds_configs/config_infer_primary_masknet_gpu.txt
tlt-encoded-model=/home/nano/Data/DeepStream/face-mask-detection/experiment_dir_final/resnet18_detector_thermal.etlt
# do not set model-engine-file
input-dims=3;480;640;0

Edit ds_configs/deepstream_app_source1_camera_masknet_gpu.txt
width=640
height=480

Finally, run
$ deepstream-app -c deepstream_app_source1_camera_masknet_gpu.txt


Tuesday, October 20, 2020

Nvidia TLT (Transfer Learning Toolkit)


Transfer learning reduces training time and also the amount of data and annotation required
TLT is delivered as a docker image; see "Docker installation and usage"
TLT provides pretrained models downloadable from NGC; see NVIDIA GPU Cloud (NGC)
TLT can run Jupyter notebooks inside the docker container
Models exported by TLT are in .etlt format and can be used with DeepStream and TensorRT
Pruning removes less important nodes from the model, reducing memory usage and speeding up inference
Dataset augmentation: color transforms, blur, rotation, scaling, flipping, cropping

TrafficCamNet and DashCamNet are both aimed at traffic:
TrafficCamNet is for a fixed viewpoint, DashCamNet for a camera on a moving vehicle

PeopleNet detects people, bags, and faces
FaceDetectIR focuses on faces

Transfer Learning Toolkit workflow:
Download a pretrained model
Prepare the training and test datasets
Train the model
Evaluate the model
Prune the model
Export the model

Installation steps
Check that the NVIDIA GPU driver is v410.xx or later with
$ nvidia-smi
Install NVIDIA Docker; see nvidia-docker and Installing Docker CE on Ubuntu
$ curl https://get.docker.com | sh
Start the docker service and enable it at boot
$ sudo systemctl start docker && sudo systemctl enable docker
Add the apt key and repository
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
$ sudo systemctl restart docker
Verify the installation
$ sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
See NVIDIA GPU Cloud (NGC) to obtain an API key
$ docker login nvcr.io
Username: $oauthtoken
Password: YOUR_NGC_API_KEY
The current TLT version is v2.0_py3
Download TLT
$ docker pull nvcr.io/nvidia/tlt-streamanalytics:<version>
Run TLT
$ docker run --runtime=nvidia -it nvcr.io/nvidia/tlt-streamanalytics:<version> /bin/bash
Mount a directory to access local data
$ docker run --runtime=nvidia -it -v /home/<username>/tlt-experiments:/workspace/tlt-experiments nvcr.io/nvidia/tlt-streamanalytics:<version> /bin/bash
Also map a network port
$ docker run --runtime=nvidia -it -v /home/<username>/tlt-experiments:/workspace/tlt-experiments -p 8888:8888 nvcr.io/nvidia/tlt-streamanalytics:<version>
Run jupyter notebook
$ cd examples/
$ jupyter notebook --ip 0.0.0.0 --allow-root

Set the NGC API key
$ ngc config set
List the models on NGC
$ ngc registry model list <model_glob_string>
For example:
$ ngc registry model list nvidia/tlt_pretrained_*
List the versions of tlt_pretrained_detectnet_v2
$ ngc registry model list nvidia/tlt_pretrained_detectnet_v2:*
Download a model
$ ngc registry model download-version <ORG/model_name:version> --dest <path_to_download_dir>
For example:
$ ngc registry model download-version nvidia/tlt_pretrained_classification:resnet18 --dest $USER_EXPERIMENT_DIR/pretrained_resnet18

For object detection, the label files use the KITTI format,
but only the class name and bounding box fields are used (see the example below)
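For example, one line of a KITTI label file might look like this (the class names mask / no-mask are my assumption for this dataset; only the first field and the four bbox values xmin ymin xmax ymax, in pixels, are read, and the remaining fields can stay 0):

mask 0.00 0 0.00 138.00 204.00 221.00 306.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00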



Monday, October 19, 2020

NVIDIA GPU Cloud (NGC)

Reference: NVIDIA GPU Cloud documentation

Register and activate your NGC account
1. Open the NGC Catalog landing page; from Welcome Guest at the top right, select Create an Account
2. Enter your name and email address, agree to the terms of use, click Sign Up, then check your mail and follow the instructions to set up the account
3. From the email, open the password-setup page
4. Set a password as instructed; you can then sign in at https://ngc.nvidia.com/signin/email with your email address and password
5. Agree to the terms of use
5. 同意使用條款

Generate an NGC API key
1. Sign in at https://ngc.nvidia.com/signin/email
2. From the user account menu at the top right, select Setup
3. Click Get API Key
4. Click Confirm to generate the key
The key only needs to be generated once; generating a new key invalidates the old one

Install the NGC Registry CLI (command-line interface)
1. Sign in to the NGC website
2. From the user account menu at the top right, select Setup
3. Under Install NGC CLI, click Downloads
4. Choose your platform and install with the commands shown, for example:
AMD64 Linux
Download
$ wget -O ngccli_cat_linux.zip https://ngc.nvidia.com/downloads/ngccli_cat_linux.zip && unzip -o ngccli_cat_linux.zip && chmod u+x ngc
Verify the downloaded file
$ md5sum -c ngc.md5
Add the command directory to the PATH environment variable
$ echo "export PATH=\"\$PATH:$(pwd)\"" >> ~/.bash_profile && source ~/.bash_profile
Set the API key
$ ngc config set

ngc command notes
Show the version
$ ngc --version
Show the available options
$ ngc -h
Show the options for registry image
$ ngc registry image -h
Show the options for registry image info
$ ngc registry image info -h


Friday, October 16, 2020

Measuring DeepStream performance

1. Read the timestamps directly
gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, buffer_probe_cb, NULL, NULL);

static GstPadProbeReturn buffer_probe_cb(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    GstBuffer *buf = (GstBuffer *) info->data;
    if (GST_BUFFER_DTS_IS_VALID(buf)) {
        GstClockTime dts = GST_BUFFER_DTS(buf);
        g_print("dts=%ldms ", dts/1000000);
    }
    if (GST_BUFFER_PTS_IS_VALID(buf)) {
        GstClockTime pts = GST_BUFFER_PTS(buf);
        g_print("pts=%ldms ", pts/1000000);
    }
    if (GST_BUFFER_DURATION_IS_VALID(buf)) {
        GstClockTime duration = GST_BUFFER_DURATION(buf);
        g_print("duration=%ldms ", duration/1000000);
    }
    g_print("\n");
    return GST_PAD_PROBE_OK;
}
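The pad passed to gst_pad_add_probe above can be any pad you want to time; a minimal sketch that attaches the probe to the sink pad of the OSD element (assuming an element pointer named nvosd, as in the nvdsosd notes below):

GstPad *pad = gst_element_get_static_pad (nvosd, "sink");
gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, buffer_probe_cb, NULL, NULL);
gst_object_unref (pad);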

GStreamer textoverlay & timeoverlay

Use textoverlay to display text
$ gst-launch-1.0 -v videotestsrc ! textoverlay text="Room A" valignment=top halignment=left font-desc="Sans, 72" ! nvvideoconvert ! nveglglessink

Use textoverlay to display subtitles and timeoverlay to display the time
Subtitle file
$ cat subtitles.srt 
 1
 00:00:03,000 --> 00:00:05,000
 Hello? (3-5s)

 2
 00:00:08,000 --> 00:00:13,000
 Yes, this is a subtitle. Don't
 you like it? (8-13s)

 3
 00:00:18,826 --> 00:01:02,886
 Uh? What are you talking about?
 I don't understand  (18-62s)

$ gst-launch-1.0 -v filesrc location=subtitles.srt ! subparse ! txt.   videotestsrc ! timeoverlay font-desc="Sans, 24" ! textoverlay name=txt shaded-background=yes font-desc="Sans, 24" ! nvvideoconvert ! nveglglessink

Thursday, October 15, 2020

DeepStream's nvdsosd

nvosd = gst_element_factory_make ("nvdsosd", "nv-onscreendisplay");

Do not draw object bounding boxes
g_object_set (G_OBJECT (nvosd), "display-bbox", 0, NULL);

Do not draw object labels
g_object_set (G_OBJECT (nvosd), "display-text", 0, NULL);

Show a clock in the top-left corner
g_object_set (G_OBJECT (nvosd), "display-clock", 1, NULL);

osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
    buffer_probe_cb, (gpointer)sink, NULL);
gst_object_unref (osd_sink_pad);

static GstPadProbeReturn buffer_probe_cb(
  GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsMetaList *l_frame = NULL, *l_obj = NULL;
  NvDsObjectMeta *obj_meta = NULL;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) 
  {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
    for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) 
    {
      obj_meta = (NvDsObjectMeta *) (l_obj->data);
      // NvDsObjectMeta is declared in sources/includes/nvdsmeta.h
      // class_id: Holds the index of the object class inferred by the primary detector/classifier.
      // object_id: Holds a unique ID for tracking the object.
      //   UNTRACKED_OBJECT_ID indicates that the object has not been tracked.
      // rect_params: the object's bounding box, possibly provided by the tracker
      // text_params: the object's display text: "primary-class track-id class1 class2 ..."
      // obj_label: the primary detector's class label
      for (GList *l = obj_meta->classifier_meta_list; l != NULL; l = l->next)
      {
        NvDsClassifierMeta *classifierMeta = (NvDsClassifierMeta *) (l->data);
        // NvDsClassifierMeta is declared in sources/includes/nvdsmeta.h
        // unique_component_id: id of the classifier component that produced this meta
        NvDsLabelInfo *pLabel = get_possible_label(classifierMeta);
      }
    }
  }
  return GST_PAD_PROBE_OK;
}

NvDsLabelInfo *get_possible_label (NvDsClassifierMeta * classifierMeta)
{
  NvDsLabelInfo *pLabel = NULL;
  for (GList *n = classifierMeta->label_info_list; n != NULL; n = n->next) {
    NvDsLabelInfo *labelInfo = (NvDsLabelInfo *) (n->data);
    // NvDsLabelInfo is declared in sources/includes/nvdsmeta.h
    // result_class_id: the best class id
    // result_prob: its probability
    // result_label: the class name
    // pResult_label: used when result_label is too long to fit
    if (pLabel == NULL) {
        pLabel = labelInfo;
    } else if (labelInfo->result_prob > pLabel->result_prob) {
        pLabel = labelInfo;
    }
  }
  return pLabel;
}

Wednesday, October 14, 2020

deepstream-test4-app study notes

See the deepstream-test5 study notes

Start zookeeper
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/zookeeper-server-start.sh config/zookeeper.properties

Start kafka
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-server-start.sh config/server.properties

Receive the triggered events
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic ObjDetect

$ ./deepstream-test4-app -i /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 -p /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so --conn-str="localhost;9092;ObjDetect"


Tuesday, October 13, 2020

GStreamer study notes: gst-launch-1.0

Use gst-launch-1.0 to play video and audio at the same time

$ gst-launch-1.0 filesrc location="/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4" ! qtdemux name=demux demux.audio_0 ! queue ! avdec_aac ! audioconvert ! alsasink demux.video_0 ! queue ! h264parse ! omxh264dec ! nveglglessink

Using decodebin
$ gst-launch-1.0 filesrc location="/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4" ! decodebin name=d d. ! nvvidconv ! nveglglessink d. ! queue ! audioconvert ! audioresample ! alsasink

$ gst-launch-1.0 filesrc location="/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4" ! decodebin name=d d.src_0 ! nvvidconv ! nveglglessink d.src_1 ! queue ! audioconvert ! audioresample ! alsasink

Using uridecodebin
nano@nano-desktop:~$ gst-launch-1.0 uridecodebin uri="file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4" name=d d. ! nvvidconv ! nveglglessink d. ! queue ! audioconvert ! audioresample ! alsasink

Play only the video
$ gst-launch-1.0 filesrc location="/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4" ! decodebin name=d d.src_0 ! nvvidconv ! nveglglessink

Wednesday, October 7, 2020

deepstream-test5 study notes

See the kafka study notes (operation)
See GStreamer
See the NVIDIA DeepStream SDK Developer Guide

Start zookeeper
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/zookeeper-server-start.sh config/zookeeper.properties

Start kafka
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-server-start.sh config/server.properties

$ vi configs/test5_config_file_src_infer.txt
[sourceX]
# type 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CAMERA_CSI
type=4
uri=rtsp://user:passwd@192.168.0.xxx:554/chID=1&streamType=main&linkType=tcp
smart-record=1
# recording can be started and stopped with the command below
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic Record
# >{"command":"start-recording","sensor":{"id":"0"}}
# >{"command":"stop-recording","sensor":{"id":"0"}}
smart-record=2
# triggers start / stop every 10 seconds
[sink1]
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
msg-broker-conn-str=localhost;9092
topic=ObjDetect
[message-consumer0]
subscribe-topic-list=Record

Useful debug-message macros, declared in
deepstream-5.0/sources/apps/apps-common/includes/deepstream_common.h
NVGSTDS_ERR_MSG_V
NVGSTDS_INFO_MSG_V
NVGSTDS_WARN_MSG_V
You can also use the debug printing from the GStreamer study notes
or the printing functions from the GLib Reference Manual
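A usage sketch of the macros above (my own example messages; I am assuming they take printf-style arguments, as they do in the DeepStream sources):

#include "deepstream_common.h"

NVGSTDS_INFO_MSG_V ("pipeline is running");
NVGSTDS_WARN_MSG_V ("wrong message format, missing '%s' field", "start");
NVGSTDS_ERR_MSG_V ("Failed to create element '%s'", "nvstreammux");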

Run the command
nano@nano-desktop:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5a$ ./deepstream-test5-app -c configs/test5_config_file_src_infer.txt 
In the terminal running the command, these keyboard commands are available:
h: Print this help
q: Quit
p: Pause
r: Resume

Make the recording command work without a "start" time field
vi deepstream-5.0/sources/apps/apps-common/src/deepstream_c2d_msg_util.c
NvDsC2DMsg*
nvds_c2d_parse_cloud_message (gpointer data, guint size)
{
...
    if (startRec) {
      time_t startUtc, endUtc, curUtc;
      const gchar *timeStr;
      if (json_object_has_member (object, "start")) {
...
      } else {
        start = 0;
        NVGSTDS_WARN_MSG_V ("wrong message format, missing 'start' field.");
        //goto error;
      }
...
}

Receive the triggered events
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic ObjDetect --from-beginning

Send the start-recording / stop-recording commands
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic Record
>{"command":"start-recording","sensor":{"id":"0"}}
>{"command":"stop-recording","sensor":{"id":"0"}}

vi deepstream-5.0/sources/apps/apps-common/src/deepstream_source_bin.c
create_camera_source_bin
create_uridecode_src_bin
create_rtsp_src_bin


Monday, October 5, 2020

Send external IP from Gmail


To create a Gmail app password you must first enable 2-Step Verification
When selecting the app type, choose Other rather than Mail

#!/usr/bin/python3
import urllib.request
from ipaddress import ip_address
import smtplib

def GetExternalIP():
    try:
        ip_b = urllib.request.urlopen("https://api.ipify.org").read()
    except:
        return ''
    ip_s = ip_b.decode('utf-8')
    try:
        ip_address(ip_s)
        return ip_s
    except:
        return ''
        
def GetBackupIP():
    try:
        file = open('/tmp/ExternalIp.txt', 'r')
        ip = file.readline()
        file.close()
    except:
        return ''
    return ip

def WriteBackupIP(ip):
    try:
        file = open('/tmp/ExternalIp.txt', 'w')
        file.write(ip)
        file.close()
    except:
        return

def SendMail(ip):
    smtp = smtplib.SMTP('smtp.gmail.com', 587)
    smtp.ehlo()
    smtp.starttls()
    smtp.login('name@gmail.com', 'lkmvemxzkegzrhwqj')
    from_addr = 'name@gmail.com'
    to_addr = 'name@yahoo.com.tw'
    msg = 'Subject:External IP\n' + ip + '\n'
    status = smtp.sendmail(from_addr, to_addr, msg)
    if status == {}:
        print('SendMail success!')
    else:
        print('SendMail fail!')
    smtp.quit()

external_ip = GetExternalIP()
print('external ip {}'.format(external_ip))
backup_ip = GetBackupIP()
print('backup ip {}'.format(backup_ip))
if external_ip != '':
    if external_ip != backup_ip:
        SendMail(external_ip)
WriteBackupIP(external_ip)

GStreamer study notes

playbin automatically uses autoaudiosink and autovideosink
The following code makes it use the sink you want instead

  pipeline =
      gst_parse_launch(
      "playbin uri=file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4",
      NULL);
  GstElement *sink = gst_element_factory_make ("nveglglessink", "sink");
  g_object_set(G_OBJECT(pipeline), "video-sink", sink, NULL);


Or adjust the plugin's rank in the registry
    GstRegistry *plugins_register = gst_registry_get();
    GstPluginFeature *nveglglessink = gst_registry_lookup_feature(plugins_register, "nveglglessink");
    if (nveglglessink == NULL) {
      g_printerr("gst_registry_lookup_feature return NULL \n");
      return 0;
    }
    guint nveglglessink_rank = gst_plugin_feature_get_rank(nveglglessink);
    g_print("nveglglessink rank=%u\n", nveglglessink_rank);
    gst_plugin_feature_set_rank(nveglglessink, GST_RANK_PRIMARY + 11);
    gst_object_unref(nveglglessink);

Using gst-launch-1.0
$ gst-launch-1.0 playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm video-sink=nveglglessink