

Tuesday, July 18, 2023

gstreamer fpsdisplaysink videorate

$ export URI=rtsp://root:A1234567@192.168.112.202:554/live1s1.sdp
$ export GST_DEBUG=fpsdisplaysink:5

Test fpsdisplaysink with videotestsrc
$ gst-launch-1.0 videotestsrc ! 'video/x-raw,width=1280,height=720,framerate=60/1' ! videoconvert ! fpsdisplaysink text-overlay=true

fpsdisplaysink without text-overlay
$ gst-launch-1.0 rtspsrc location=$URI protocols=tcp+udp ! application/x-rtp, media=video ! decodebin  ! nvvideoconvert ! nvegltransform ! fpsdisplaysink text-overlay=0 video-sink=nveglglessink

fpsdisplaysink with text-overlay
$ gst-launch-1.0 rtspsrc location=$URI protocols=tcp+udp ! application/x-rtp, media=video ! decodebin  ! nvvideoconvert ! fpsdisplaysink text-overlay=1 video-sink=autovideosink

Set the framerate with videorate
$ gst-launch-1.0 rtspsrc location=$URI protocols=tcp+udp ! application/x-rtp, media=video ! decodebin ! nvvideoconvert ! videorate ! video/x-raw,framerate=60/1 ! nvvideoconvert ! fpsdisplaysink text-overlay=1 video-sink=autovideosink

Add rtpjitterbuffer, though it does not seem to help
$ gst-launch-1.0 rtspsrc location=$URI protocols=tcp+udp ! application/x-rtp, media=video ! rtpjitterbuffer latency=0 ! decodebin  ! nvvideoconvert ! fpsdisplaysink text-overlay=1 video-sink=autovideosink

No on-screen display, but the fps can be read from the log
$ gst-launch-1.0 rtspsrc location=$URI protocols=tcp+udp ! application/x-rtp, media=video ! decodebin  ! nvvideoconvert ! fpsdisplaysink text-overlay=0 video-sink=fakesink
Output
0:00:02.590692019 1665816 0xffff6001d700 DEBUG         fpsdisplaysink fpsdisplaysink.c:372:display_current_fps:<fpsdisplaysink0> Updated max-fps to 1.102534
0:00:02.590778644 1665816 0xffff6001d700 DEBUG         fpsdisplaysink fpsdisplaysink.c:376:display_current_fps:<fpsdisplaysink0> Updated min-fps to 1.102534
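The max-fps/min-fps values can be pulled out of a captured log with a small helper. This is a sketch: `extract_fps` is a hypothetical name, and the sed pattern assumes the "Updated max-fps to N" message format shown above.

```shell
# Hypothetical helper: extract the max-fps/min-fps updates from a
# GST_DEBUG=fpsdisplaysink:5 log (pattern matches the lines shown above).
extract_fps() {
  sed -n 's/.*Updated \(max\|min\)-fps to \([0-9.]*\).*/\1 \2/p' "$1"
}
```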


$ gst-launch-1.0 rtspsrc location=$URI protocols=tcp+udp ! application/x-rtp, media=video ! decodebin  ! nvvideoconvert ! videorate ! video/x-raw,framerate=60/1 ! nvvideoconvert ! fpsdisplaysink text-overlay=0 video-sink=fakesink

Tuesday, July 5, 2022

Ubuntu 18.04 Reinstall

Ubuntu 18.04 can only install DeepStream 6.0.1;
DeepStream 6.1 requires Ubuntu 20.04,
so run an Ubuntu 20.04 Docker container on Ubuntu 18.04.
The version matrix:
===========================
DS 6.1
Ubuntu 20.04
GCC 9.4.0
CUDA 11.6.1
cuDNN 8.4.0.27
TRT 8.2.5.1
Display Driver:R510.47.03
GStreamer 1.16.2
OpenCV 4.2.0
deepstream:6.1
===========================
DS 6.0.1
Ubuntu 18.04
GCC 7.3.0
CUDA 11.4.1
cuDNN 8.2+
TRT 8.0.1
Display Driver:R470.63.01
GStreamer 1.14.5
OpenCV 3.4.0
deepstream:6.0.1
===========================

Install the operating system
Choose the boot device in the BIOS
Install Ubuntu
For Installation type, choose Something else
Create partition, Mount point: choose /

Settings/Details/Users
Unlock
Automatic Login: ON

Update the system and install some common packages
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install ssh
$ sudo apt-get install python3-pip

Mount NFS
$ sudo apt-get install nfs-common
$ sudo mount -t nfs ip:/share_folder /mount_folder
$ sudo vi /etc/fstab
# use tabs between the fields
ip:/share_folder /mnt/mount_folder nfs defaults,bg 0 0
$ sudo mount -a
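Before running `mount -a`, a quick field-count check can catch a malformed entry. A minimal sketch, assuming the standard 6-field fstab format; the line below is the placeholder entry from above.

```shell
# Sanity-check an fstab entry: a standard line has exactly 6 fields
# (device, mount point, type, options, dump, pass).
line='ip:/share_folder /mnt/mount_folder nfs defaults,bg 0 0'
echo "$line" | awk 'NF == 6 { print "ok"; next } { print "bad line: " $0 }'
```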

vnc
$ sudo apt-get install x11vnc
$ sudo x11vnc -storepasswd
$ sudo chown user:group ~/.vnc/passwd

Install the GPU driver, CUDA and cuDNN
$ sudo ubuntu-drivers devices
$ sudo apt-get install nvidia-driver-510

https://developer.nvidia.com/cuda-downloads
Choose Archive of Previous CUDA Releases
Choose CUDA Toolkit 11.4.1
Choose deb (local); deb (network) can no longer be used
$ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
$ sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
$ wget https://developer.download.nvidia.com/compute/cuda/11.4.1/local_installers/cuda-repo-ubuntu1804-11-4-local_11.4.1-470.57.02-1_amd64.deb
$ sudo dpkg -i cuda-repo-ubuntu1804-11-4-local_11.4.1-470.57.02-1_amd64.deb
$ sudo apt-key add /var/cuda-repo-ubuntu1804-11-4-local/7fa2af80.pub
$ sudo apt-get update
$ sudo apt-get -y install cuda-11-4
Then also install CUDA Toolkit 11.3.1 (TensorRT 8.0.1 needs CUDA 11.3)
$ sudo apt-get -y install cuda-11-3
Then also install CUDA Toolkit 11.6.1 (DeepStream 6.1 needs CUDA 11.6)
$ sudo apt-get -y install cuda-11-6
$ update-alternatives --display cuda
$ update-alternatives --config cuda
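With several toolkits side by side, it helps to list what is actually installed before switching with update-alternatives. A sketch: `list_cuda` is a hypothetical helper name, and it assumes the deb packages above install under /usr/local/cuda-&lt;version&gt;.

```shell
# List the CUDA toolkit directories under a prefix (normally /usr/local);
# the deb packages above install to /usr/local/cuda-<version>.
list_cuda() {
  for d in "$1"/cuda-*; do
    if [ -d "$d" ]; then basename "$d"; fi
  done
}

list_cuda /usr/local
```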


Reference: https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html
Download Local Install for Ubuntu18.04 x86_64 (Deb)
$ sudo apt-get install zlib1g
$ sudo dpkg -i cudnn-local-repo-ubuntu1804-8.4.1.50_1.0-1_amd64.deb
$ sudo cp /var/cudnn-local-repo-ubuntu1804-8.4.1.50/cudnn-local-BA71F057-keyring.gpg /usr/share/keyrings/
$ sudo apt-get update
$ apt list -a libcudnn8
$ sudo apt-get install libcudnn8=8.4.1.50-1+cuda11.6
$ sudo apt-get install libcudnn8-dev=8.4.1.50-1+cuda11.6
$ sudo apt-get install libcudnn8-samples=8.4.1.50-1+cuda11.6

Install TensorRT 8.0.1
Since DeepStream 6.1 runs in Docker, TensorRT 8.2.5.1 does not need to be installed
Reference: https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html
Use TensorRT Archives to switch to the matching documentation version
In 3. Downloading TensorRT:
open https://developer.nvidia.com/tensorrt and click GET STARTED
open https://developer.nvidia.com/tensorrt-getting-started and click DOWNLOAD NOW
choose TensorRT 8
choose TensorRT 8.0 GA
choose TensorRT 8.0.1 GA for Ubuntu 18.04 and CUDA 11.3 DEB local repo package
$ sudo dpkg -i nv-tensorrt-repo-ubuntu1804-cuda11.3-trt8.0.1.6-ga-20210626_1-1_amd64.deb
$ sudo apt-key add /var/nv-tensorrt-repo-ubuntu1804-cuda11.3-trt8.0.1.6-ga-20210626/7fa2af80.pub
$ sudo apt-get update
$ sudo apt --fix-broken install
$ sudo apt-get upgrade
$ sudo apt-get install tensorrt

Install DeepStream
Since this is not the latest version,
see https://docs.nvidia.com/metropolis/deepstream-archive.html
$ sudo apt install libssl1.0.0
$ sudo apt install libgstreamer1.0-0
$ sudo apt install gstreamer1.0-tools
$ sudo apt install gstreamer1.0-plugins-good
$ sudo apt install gstreamer1.0-plugins-bad
$ sudo apt install gstreamer1.0-plugins-ugly
$ sudo apt install gstreamer1.0-libav
$ sudo apt install libgstrtspserver-1.0-0
$ sudo apt install libjansson4
$ sudo apt install gcc
$ sudo apt install make
$ sudo apt install git
$ sudo apt install python3

$ cd /usr/bin
$ sudo ln -s python3 python
$ git clone https://github.com/edenhill/librdkafka.git
$ cd librdkafka
$ git reset --hard 7101c2310341ab3f4675fc565f64f0967e135a6a
$ ./configure
$ make
$ sudo make install
$ sudo mkdir -p /opt/nvidia/deepstream/deepstream-6.0/lib
$ sudo cp /usr/local/lib/librdkafka* /opt/nvidia/deepstream/deepstream-6.0/lib
Download the DeepStream 6.0 dGPU Debian package
https://developer.nvidia.com/deepstream-6.0_6.0.1-1_amd64deb
$ sudo apt-get install ./deepstream-6.0_6.0.1-1_amd64.deb
$ rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
$ cd /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/
$ deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt

Install NVIDIA Docker
Reference: https://docs.docker.com/engine/install/ubuntu/
$ sudo apt-get update
$ sudo apt-get install ca-certificates
$ sudo apt-get install curl
$ sudo apt-get install gnupg
$ sudo apt-get install lsb-release
$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$ echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
$ sudo docker run --rm hello-world

Reference: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
      && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
      && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
            sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
            sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
$ sudo systemctl restart docker
$ sudo groupadd docker
$ sudo usermod -a -G docker $USER
$ sudo reboot

Install NGC CLI
Reference: https://ngc.nvidia.com/setup/installers/cli
$ wget --content-disposition https://ngc.nvidia.com/downloads/ngccli_linux.zip && \
  unzip ngccli_linux.zip && \
  chmod u+x ngc-cli/ngc
$ find ngc-cli/ -type f -exec md5sum {} + | LC_ALL=C sort | md5sum -c ngc-cli.md5
$ echo "export PATH=\"\$PATH:$(pwd)/ngc-cli\"" >> ~/.bash_profile && source ~/.bash_profile
$ ngc config set
# just press Enter at each prompt
$ docker login nvcr.io
Username: $oauthtoken
Password: <Your API Key>

Install TensorRT OSS using Docker
Open https://github.com/nvidia/TensorRT
Switch to tag 8.0.1
$ git clone -b master https://github.com/nvidia/TensorRT TensorRT_OSS-8.0.1
$ cd TensorRT_OSS-8.0.1/
$ git describe --tags
8.2.0-EA-2-g96e2397
$ git tag -l
$ git branch -r
$ git checkout 8.0.1
$ git log -1
$ git describe --tags
8.0.1
$ git submodule update --init --recursive
$ vi docker/ubuntu-18.04.Dockerfile
Modify the following line
RUN cd /usr/local/bin && wget https://ngc.nvidia.com/downloads/ngccli_cat_linux.zip && \
  unzip ngccli_cat_linux.zip && chmod u+x ngc-cli/ngc && \
  rm ngccli_cat_linux.zip ngc-cli.md5 && echo "no-apikey\nascii\n" | ngc-cli/ngc config set

$ cat docker/ubuntu-18.04.Dockerfile | grep CUDA_VERSION
ARG CUDA_VERSION=11.3.1
$ ./docker/build.sh --file docker/ubuntu-18.04.Dockerfile --tag tensorrt-ubuntu18.04-cuda11.3 --cuda 11.3.1
$ ./docker/launch.sh --tag tensorrt-ubuntu18.04-cuda11.3 --gpus all
/workspace$ cd $TRT_OSSPATH
/workspace/TensorRT$ mkdir -p build && cd build
/workspace/TensorRT/build$ cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out
/workspace/TensorRT/build$ make -j$(nproc)
/workspace/TensorRT/build$ exit
$ mkdir backup
$ sudo mv /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.0.1 backup/
$ sudo cp build/out/libnvinfer_plugin.so.8.0.1 /usr/lib/x86_64-linux-gnu/
TensorRT 8.2.1 must also be built for the Docker Ubuntu 20.04 container:
change 8.0.1 to 8.2.1, 18.04 to 20.04, and 11.3.1 to 11.4.2; the install step is not needed
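The substitutions above can be applied mechanically. This sketch rewrites the version strings in a command line; `to_tensorrt_821` is a hypothetical helper, and the extra 11.3 → 11.4 rule (for the image tag) is my assumption, not from the post.

```shell
# Rewrite the TensorRT OSS build commands for the 8.2.1 / Ubuntu 20.04 /
# CUDA 11.4.2 combination described above.
to_tensorrt_821() {
  sed -e 's/8\.0\.1/8.2.1/g' -e 's/18\.04/20.04/g' \
      -e 's/11\.3\.1/11.4.2/g' -e 's/11\.3/11.4/g'
}

echo './docker/build.sh --file docker/ubuntu-18.04.Dockerfile --tag tensorrt-ubuntu18.04-cuda11.3 --cuda 11.3.1' | to_tensorrt_821
```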

Developing DeepStream 6.0.1 in Docker
Reference: https://docs.nvidia.com/metropolis/deepstream/6.0.1/dev-guide/text/DS_docker_containers.html
$ docker pull nvcr.io/nvidia/deepstream:6.0.1-devel
$ xhost +
access control disabled, clients can connect from any host
$ sudo docker run --gpus all -it --rm --net=host \
  -v /tmp/.X11-unix:/tmp/.X11-unix -v /etc/localtime:/etc/localtime \
  -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.0 nvcr.io/nvidia/deepstream:6.0.1-devel
# update-alternatives --display cuda
# cat /etc/os-release
# cd samples/configs/deepstream-app
# deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt 
# cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-app
# export CUDA_VER=11.4
# make
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream-6.0/lib/gst-plugins/
# export URI=rtsp://user:passwd@192.168.0.108:554/live1s2.sdp
# gst-launch-1.0 uridecodebin uri=$URI ! nvvideoconvert ! nveglglessink
# exit

Developing DeepStream 6.1 in Docker
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_docker_containers.html
$ docker pull nvcr.io/nvidia/deepstream:6.1-devel
$ xhost +
access control disabled, clients can connect from any host
$ sudo docker run --gpus all -it --rm --net=host \
  -v /tmp/.X11-unix:/tmp/.X11-unix -v /etc/localtime:/etc/localtime \
  -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.1 nvcr.io/nvidia/deepstream:6.1-devel
# cd samples/configs/deepstream-app
# deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt 
# cd /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-app
# export CUDA_VER=11.6
# make
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream-6.1/lib/gst-plugins/
# export URI=rtsp://user:passwd@192.168.0.108:554/live1s2.sdp
# gst-launch-1.0 uridecodebin uri=$URI ! nvvideoconvert ! nveglglessink
# exit

Testing Integrate TAO model with DeepStream SDK in the DeepStream 6.0.1 Docker container
$ git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git deepstream_tao_apps-tao3.0_ds6.0.1
$ sudo apt install gitk
$ cd deepstream_tao_apps-tao3.0_ds6.0.1/
$ git branch -r
$ git checkout release/tao3.0_ds6.0.1
$ git log -1
$ sudo docker run --gpus all -it --rm --net=host \
> -v /etc/localtime:/etc/localtime \
> -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
> -v /home/mark/Data/TensorRT/TensorRT_OSS-8.0.1/:/home/TensorRT \
> -v /home/mark/Data/DeepStream/deepstream_tap_apps/deepstream_tao_apps-tao3.0_ds6.0.1/:/home/deepstream_tao_apps \
> -w /opt/nvidia/deepstream/deepstream-6.0 nvcr.io/nvidia/deepstream:6.0.1-devel
# cd /home/deepstream_tao_apps/
# ./download_models.sh 
# export CUDA_VER=11.4
# make
# cp /home/TensorRT/build/out/libnvinfer_plugin.so.8.0.1 /usr/lib/x86_64-linux-gnu/
# ./apps/tao_detection/ds-tao-detection -c configs/frcnn_tao/pgie_frcnn_tao_config.txt -i /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264 -d
# cd /opt/nvidia/deepstream/deepstream-6.0/sources/gst-plugins/gst-nvdsvideotemplate/
# make
# cp libnvdsgst_videotemplate.so /opt/nvidia/deepstream/deepstream-6.0/lib/gst-plugins/
# rm -rf ~/.cache/gstreamer-1.0/
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream-6.0/lib/cvcore_libs/
# cd /home/deepstream_tao_apps/apps/tao_others/
# make
# export URI=rtsp://user:passwd@192.168.0.108:554/live1s2.sdp
# cd deepstream-bodypose2d-app/
# ./deepstream-bodypose2d-app 3 ../../../configs/bodypose2d_tao/sample_bodypose2d_model_config.txt $URI ./body2dout
# cd ../deepstream-emotion-app/
# ./deepstream-emotion-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt $URI ./landmarks
# cd ../deepstream-faciallandmark-app/
# ./deepstream-faciallandmark-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt $URI ./landmarks
# cd ../deepstream-gaze-app/
# ./deepstream-gaze-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt $URI ./gazenet
# cd ../deepstream-gesture-app/
# ./deepstream-gesture-app 3 3 ../../../configs/bodypose2d_tao/sample_bodypose2d_model_config.txt $URI ./gesture
# cd ../deepstream-heartrate-app/
# ./deepstream-heartrate-app 3 $URI ./heartrate
# exit

Testing Integrate TAO model with DeepStream SDK in the DeepStream 6.1 Docker container
$ git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git deepstream_tao_apps-tao3.0_ds6.1ga
$ sudo apt install gitk
$ cd deepstream_tao_apps-tao3.0_ds6.1ga/
$ git branch -r
$ git checkout release/tao3.0_ds6.1ga
$ git log -1
$ sudo docker run --gpus all -it --rm --net=host \
> -v /etc/localtime:/etc/localtime \
> -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
> -v /home/mark/Data/TensorRT/TensorRT_OSS-8.2.1/:/home/TensorRT \
> -v /home/mark/Data/DeepStream/deepstream_tap_apps/deepstream_tao_apps-tao3.0_ds6.1ga/:/home/deepstream_tao_apps \
> -w /opt/nvidia/deepstream/deepstream-6.1 nvcr.io/nvidia/deepstream:6.1-devel
# cp /home/TensorRT/build/out/libnvinfer_plugin.so.8.2.1 /usr/lib/x86_64-linux-gnu/
# cd /usr/lib/x86_64-linux-gnu/
# rm libnvinfer_plugin.so.8
# ln -s libnvinfer_plugin.so.8.2.1 libnvinfer_plugin.so.8
# cd /home/deepstream_tao_apps/
# ./download_models.sh 
# export CUDA_VER=11.6
# make
# ./apps/tao_detection/ds-tao-detection -c configs/frcnn_tao/pgie_frcnn_tao_config_dgpu.txt -i /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264 -d
# cd /opt/nvidia/deepstream/deepstream-6.1/sources/gst-plugins/gst-nvdsvideotemplate/
# make
# cp libnvdsgst_videotemplate.so /opt/nvidia/deepstream/deepstream-6.1/lib/gst-plugins/
# rm -rf ~/.cache/gstreamer-1.0/
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream-6.1/lib/cvcore_libs/
# cd /home/deepstream_tao_apps/apps/tao_others/
# make
# export URI=rtsp://user:passwd@192.168.0.108:554/live1s2.sdp
# cd deepstream-bodypose2d-app/
# ./deepstream-bodypose2d-app 3 ../../../configs/bodypose2d_tao/sample_bodypose2d_model_config.txt 0 0 $URI ./body2dout
# cd ../deepstream-emotion-app/
# ./deepstream-emotion-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt $URI ./landmarks
# cd ../deepstream-faciallandmark-app/
# ./deepstream-faciallandmark-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt $URI ./landmarks
# cd ../deepstream-gaze-app/
# ./deepstream-gaze-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt $URI ./gazenet
# cd ../deepstream-gesture-app/
# ./deepstream-gesture-app 3 3 ../../../configs/bodypose2d_tao/sample_bodypose2d_model_config.txt $URI ./gesture
# cd ../deepstream-heartrate-app/
# ./deepstream-heartrate-app 3 $URI ./heartrate
# exit

Install the remaining components
See Install CMake and Install OpenCV in the Ubuntu 18.04 Reinstall post



Friday, January 14, 2022

deepstream with python

Reference: DeepStream Python Apps
$ git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git
$ cd deepstream_python_apps/bindings/
$ git submodule update --init
$ cd ../3rdparty/gst-python/
$ ./autogen.sh
$ make
$ make install
$ cd ../../bindings/
$ mkdir build
$ cd build
$ cmake ..
$ make
$ cd ..
$ mkdir export_pyds
$ cp build/pyds*.whl export_pyds
$ pip3 install export_pyds/pyds-1.1.0-py3-none-linux_x86_64.whl



Wednesday, December 29, 2021

Install nvdec and nvh264enc as GStreamer plugins

$ sudo apt-get install gtk-doc-tools
$ git clone -b 1.14.5 https://github.com/GStreamer/gst-plugins-bad.git

$ unzip Video_Codec_SDK_11.1.5.zip
$ cp Video_Codec_SDK_11.1.5/Interface/nvEncodeAPI.h gst-plugins-bad/sys/nvenc/
$ cp Video_Codec_SDK_11.1.5/Interface/cuviddec.h gst-plugins-bad/sys/nvdec/
$ cp Video_Codec_SDK_11.1.5/Interface/nvcuvid.h gst-plugins-bad/sys/nvdec/

$ cd gst-plugins-bad/
$ ./autogen.sh --with-cuda-prefix="/usr/local/cuda"
$ cd sys/nvenc/
$ make
$ sudo cp .libs/libgstnvenc.so /usr/lib/x86_64-linux-gnu/gstreamer-1.0/
$ cd ../nvdec
$ make
$ sudo cp .libs/libgstnvdec.so /usr/lib/x86_64-linux-gnu/gstreamer-1.0/
$ gst-inspect-1.0 nvh264enc
$ gst-inspect-1.0 nvdec


Friday, December 17, 2021

GStreamer over RTSP

URI="rtsp://user:password@192.168.1.123:554/cam/realmonitor?channel=1&subtype=0"

Using playbin
$ gst-launch-1.0 -v playbin uri=$URI
$ gst-launch-1.0 -v playbin uri=$URI uridecodebin0::source::latency=3000::video-sink=autovideosink

Using uridecodebin
$ gst-launch-1.0 uridecodebin uri=$URI name=d d. ! nvvideoconvert ! nveglglessink
$ gst-launch-1.0 uridecodebin uri=$URI name=d d. ! nvvideoconvert ! nveglglessink d. ! queue ! audioconvert ! audioresample ! autoaudiosink

Using decodebin
$ gst-launch-1.0 rtspsrc location=$URI ! application/x-rtp, media=video ! decodebin ! nvvideoconvert ! nveglglessink
$ gst-launch-1.0 rtspsrc location=$URI ! application/x-rtp, media=video ! decodebin name=d d. ! queue ! nvvideoconvert ! nveglglessink d. ! queue ! audioconvert ! audioresample ! autoaudiosink

Building the pipeline by hand
$ gst-launch-1.0 rtspsrc location=$URI ! application/x-rtp, media=video ! queue ! rtph265depay ! h265parse ! nvv4l2decoder ! nvvideoconvert ! nveglglessink
$ gst-launch-1.0 rtspsrc location=$URI name=d d. ! application/x-rtp, media=video ! queue ! rtph265depay ! h265parse ! nvv4l2decoder ! nvvideoconvert ! nveglglessink d. ! queue ! application/x-rtp, media=audio ! rtppcmadepay ! alawdec ! autoaudiosink

Save to a file
$ gst-launch-1.0 -e rtspsrc location=$URI ! application/x-rtp, media=video ! queue ! rtph265depay ! h265parse ! matroskamux ! filesink location=aa.mkv
Save to a file and display
$ gst-launch-1.0 -e rtspsrc location=$URI ! application/x-rtp, media=video ! queue ! rtph265depay ! h265parse ! tee name=t  t. ! queue ! avdec_h265 ! videoconvert ! autovideosink t. ! queue ! matroskamux ! filesink location=aa.mkv
Save to a file and display (because nvv4l2decoder is used, the tee must come before h265parse)
$ gst-launch-1.0 -e rtspsrc location=$URI ! application/x-rtp, media=video ! queue ! rtph265depay ! tee name=t  t. ! queue ! h265parse ! nvv4l2decoder ! nvvideoconvert ! nveglglessink t. ! queue ! h265parse ! matroskamux ! filesink location=aa.mkv

Commonly used elements
nveglglessink nvoverlaysink autovideosink xvimagesink fakesink
nvvideoconvert videoconvert
nvv4l2decoder avdec_h265

On Jetson, add nvegltransform before nveglglessink

gst-inspect-1.0
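The hardware/software pairs above can be probed at run time. A sketch, with a hypothetical `pick_decoder` helper: it falls back to the software H.265 decoder when the NVIDIA one is not registered.

```shell
# Prefer the NVIDIA decoder; fall back to software decode when it is
# missing (gst-inspect-1.0 exits non-zero for unknown elements).
pick_decoder() {
  if command -v gst-inspect-1.0 >/dev/null 2>&1 \
     && gst-inspect-1.0 nvv4l2decoder >/dev/null 2>&1; then
    echo nvv4l2decoder
  else
    echo avdec_h265
  fi
}

pick_decoder
```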

Monday, January 11, 2021

Ubuntu 18.04 Reinstall

Based on https://developer.nvidia.com/deepstream-getting-started
choose the matching version of each component

https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html

sudo apt-get install ssh
sudo apt install python3-pip
sudo ln -s /usr/bin/python3 /usr/bin/python

Install the NVIDIA driver
https://www.linuxbabe.com/ubuntu/install-nvidia-driver-ubuntu-18-04
sudo lshw -c display
sudo lshw -c video
sudo ubuntu-drivers devices
sudo ubuntu-drivers autoinstall
sudo reboot
sudo lshw -c display

Install the CUDA Toolkit
https://developer.nvidia.com/cuda-downloads
Select CUDA Toolkit 10.2
Linux/x86_64/Ubuntu/18.04/deb (local)
vi ~/.bashrc
export PATH=/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH

Install cuDNN
https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html
https://developer.nvidia.com/cudnn
Choose cuDNN v8.0.5 for CUDA 10.2
Choose cuDNN Library for Linux (x86)
tar -xzvf cudnn-10.2-linux-x64-v8.0.5.39.tgz
sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
sudo ldconfig
/sbin/ldconfig.real: /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.8 is not a symbolic link
cd /usr/local/cuda-10.2/targets/x86_64-linux/lib
sudo rm libcudnn.so.8 libcudnn.so
sudo ln -s libcudnn.so.8.0.5 libcudnn.so.8
sudo ln -s libcudnn.so.8 libcudnn.so
sudo ldconfig
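The ldconfig warning above means the versioned .so was copied as a regular file. A quick check (hypothetical `check_symlink` helper) shows whether a given library path is now a proper symlink.

```shell
# Report whether a given path is a symlink (ldconfig expects the
# versioned .so names to be symlinks to the real library file).
check_symlink() {
  if [ -L "$1" ]; then
    echo "$1 -> $(readlink "$1")"
  else
    echo "$1: not a symlink"
  fi
}
```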

Install TensorRT
Download the TensorRT binary
https://developer.nvidia.com/nvidia-tensorrt-7x-download
Choose TensorRT 7.1.3.4 for Ubuntu 18.04 and CUDA 10.2 TAR package
version="7.1.3.4"
os="Ubuntu-18.04"
arch=$(uname -m)
cuda="cuda-10.2"
cudnn="cudnn8.0"
tar -xvzf TensorRT-7.1.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz
vi ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/Data/TensorRT/TensorRT-7.1.3.4/lib
source ~/.bashrc
cd TensorRT-7.1.3.4/python
sudo pip3 install tensorrt-7.1.3.4-cp36-none-linux_x86_64.whl
cd ../uff
sudo pip3 install uff-0.6.9-py2.py3-none-any.whl
cd ../graphsurgeon/
sudo pip3 install graphsurgeon-0.4.5-py2.py3-none-any.whl

Install GStreamer
https://yingrenn.blogspot.com/2020/09/gstreamer.html

Install DeepStream
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html#dgpu-setup-for-ubuntu
sudo apt install \
 libssl1.0.0 \
 libgstreamer1.0-0 \
 gstreamer1.0-tools \
 gstreamer1.0-plugins-good \
 gstreamer1.0-plugins-bad \
 gstreamer1.0-plugins-ugly \
 gstreamer1.0-libav \
 libgstrtspserver-1.0-0 \
 libgstrtspserver-1.0-dev \
 libjansson4
git clone https://github.com/edenhill/librdkafka.git
cd librdkafka
git reset --hard 7101c2310341ab3f4675fc565f64f0967e135a6a
./configure
make
sudo make install
sudo mkdir -p /opt/nvidia/deepstream/deepstream-5.0/lib
sudo cp /usr/local/lib/librdkafka* /opt/nvidia/deepstream/deepstream-5.0/lib
Download DeepStream
https://developer.nvidia.com/assets/Deepstream/5.0/ga/secure/deepstream_sdk_5.0.1_x86_64.tbz2
sudo tar -xvf deepstream_sdk_v5.0.1_x86_64.tbz2 -C /
cd /opt/nvidia/deepstream/deepstream-5.0/
sudo ./install.sh
sudo ldconfig

Install CMake v3.13
wget http://www.cmake.org/files/v3.13/cmake-3.13.5.tar.gz
tar xpvf cmake-3.13.5.tar.gz cmake-3.13.5
cd cmake-3.13.5
sudo apt-get install zlib1g-dev
sudo apt-get install curl
sudo apt-get install libcurl3
sudo apt-get install libcurl4 libcurl4-openssl-dev
./bootstrap --system-curl
make -j$(nproc)
sudo make install

Install Docker
https://docs.docker.com/engine/install/ubuntu/
sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

Install NVIDIA Docker
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker
curl https://get.docker.com | sh \
  && sudo systemctl start docker \
  && sudo systemctl enable docker
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
   && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
   && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
sudo groupadd docker
sudo usermod -a -G docker $USER
sudo reboot

If sudo apt update reports
W: Target CNF (stable/cnf/Commands-all) is configured multiple times in /etc/apt/sources.list:52 and /etc/apt/sources.list.d/docker.list:1
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 4773BD5E130D1D45
sudo rm /etc/apt/sources.list.d/docker.list

Install TensorRT 7.1 OSS
https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html
https://github.com/NVIDIA/TensorRT/tree/master
Switch from master to release/7.1
Download TensorRT OSS
TensorRT OSS contains the TensorRT plugins, the Caffe and ONNX parsers, etc.
git clone -b master https://github.com/nvidia/TensorRT TensorRT
cd TensorRT
git checkout release/7.1
export TRT_SOURCE=`pwd`
git submodule update --init --recursive
Using the TensorRT binary from earlier
cd TensorRT-7.1.3.4
export TRT_RELEASE=`pwd`
cd $TRT_SOURCE
./docker/build.sh --file docker/ubuntu.Dockerfile --tag tensorrt-ubuntu --os 18.04 --cuda 10.2
./docker/launch.sh --tag tensorrt-ubuntu --gpus all --release $TRT_RELEASE --source $TRT_SOURCE
cd $TRT_SOURCE
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_RELEASE/lib -DTRT_OUT_DIR=`pwd`/out
make -j$(nproc)
exit
mkdir backup
mv $TRT_RELEASE/targets/x86_64-linux-gnu/lib/libnvinfer_plugin.so.7.1.3 backup
cp $TRT_SOURCE/build/out/libnvinfer_plugin.so.7.2.1 $TRT_RELEASE/targets/x86_64-linux-gnu/lib/libnvinfer_plugin.so.7.1.3

Install OpenCV 4.4
git clone https://github.com/opencv/opencv.git opencv-4.4.0 -b 4.4.0 --depth 1
git clone https://github.com/opencv/opencv_contrib.git opencv_contrib-4.4.0 -b 4.4.0 --depth 1
sudo apt-get update
sudo add-apt-repository ppa:alex-p/tesseract-ocr
sudo apt install tesseract-ocr libtesseract-dev
sudo apt-get install libleptonica-dev
sudo apt-get install qt5-default
sudo apt-get install qtcreator
sudo apt-get install build-essential cmake unzip pkg-config
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install libgtk-3-dev
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install libhdf5-dev
sudo apt-get install python3-dev
mkdir build; cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D WITH_TBB=ON \
-D BUILD_opencv_cudacodec=OFF \
-D ENABLE_FAST_MATH=1 \
-D CUDA_FAST_MATH=1 \
-D WITH_CUDA=ON \
-D WITH_CUBLAS=ON \
-D WITH_LIBV4L=ON \
-D BUILD_opencv_python3=ON \
-D BUILD_opencv_python2=OFF \
-D BUILD_opencv_java=OFF \
-D WITH_V4L=ON \
-D WITH_QT=ON \
-D WITH_OPENGL=ON \
-D WITH_GSTREAMER=ON \
-D WITH_GTK=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D OPENCV_PC_FILE_NAME=opencv.pc \
-D OPENCV_ENABLE_NONFREE=OFF \
-D OPENCV_EXTRA_MODULES_PATH=/your_path_to/opencv/opencv_contrib-4.4.0/modules \
-D BUILD_EXAMPLES=ON \
-D WITH_CUDNN=ON \
-D CUDNN_VERSION="8.0.5" \
-D OPENCV_DNN_CUDA=ON \
-D CUDNN_INCLUDE_DIR=/usr/local/cuda/include \
-D CUDNN_LIBRARY=/usr/local/cuda/lib64/libcudnn.so.8.0.5 \
-D CUDA_ARCH_BIN=7.5 \
../opencv-4.4.0
#-D OPENCV_ENABLE_NONFREE=ON \
#-D OpenGL_GL_PREFERENCE=LEGACY \

make -j$(nproc)
sudo make install
sudo ldconfig
opencv_version -v

Download the Tesseract traditional Chinese data files chi_tra.traineddata and chi_tra_vert.traineddata from
https://github.com/tesseract-ocr/tessdata
and place them in /usr/share/tesseract-ocr/4.00/tessdata

Install Xpra
https://www.xpra.org/trac/wiki/Building/Debian
sudo apt-get install libx11-dev libxtst-dev libxcomposite-dev libxdamage-dev \
 libxkbfile-dev python-all-dev
sudo apt-get install libgtk-3-dev python3-dev python3-cairo-dev python-gi-dev cython3
sudo apt-get install xauth x11-xkb-utils
sudo apt-get install libx264-dev libvpx-dev yasm
sudo apt-get install libnvidia-encode-440
sudo apt-get install libavformat-dev libavcodec-dev libswscale-dev
sudo apt-get install libturbojpeg-dev
sudo apt-get install libwebp-dev
sudo apt-get install uglifyjs brotli libjs-jquery libjs-jquery-ui gnome-backgrounds
sudo apt-get install python3-opengl python3-numpy python3-pil
sudo apt-get install python3-rencode python3-lz4 python3-dbus python3-cryptography \
 python3-netifaces python3-yaml
sudo apt-get install python3-setproctitle python3-xdg python3-pyinotify python3-opencv
sudo apt-get install libpam-dev quilt xserver-xorg-dev xutils-dev xvfb keyboard-configuration
sudo apt-get install python3-kerberos python3-gssapi
sudo apt-get install gstreamer1.0-pulseaudio gstreamer1.0-alsa \
 gstreamer1.0-plugins-base gstreamer1.0-plugins-good \
 gstreamer1.0-plugins-ugly
sudo apt-get install cups-filters cups-common cups-pdf python3-cups
sudo apt-get install openssh-client sshpass python3-paramiko
sudo apt-get install devscripts build-essential lintian debhelper
wget https://www.xpra.org/src/xpra-4.0.4.tar.xz
tar -xf xpra-4.0.4.tar.xz
cd xpra-4.0.4
vi setup.py
#!/usr/bin/env python3
sudo ./setup.py install

Install frp
https://yingrenn.blogspot.com/2020/03/frp.html
vi frps.ini
[common]
bind_port = 7000
dashboard_port = 7001
dashboard_user = user
dashboard_pwd = password

vi /etc/systemd/system/frps.service
[Unit]
Description=FRP Server Daemon

[Service]
Type=simple
ExecStartPre=-/usr/sbin/setcap cap_net_bind_service=+ep /home/mark/Data/frp/frp_0.34.3_linux_amd64/frps
ExecStart=/path_to_frp/frp_0.34.3_linux_amd64/frps -c /path_to_frp/frp_0.34.3_linux_amd64/frps.ini
Restart=always
RestartSec=20s
User=nobody
PermissionsStartOnly=true
LimitNOFILE=infinity

[Install]
WantedBy=multi-user.target

sudo systemctl start frps
sudo systemctl enable frps
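The server side above needs a matching client. A minimal frpc.ini sketch; the [ssh] section, server address, and ports are illustrative assumptions, not from this post (only server_port = 7000 matches the bind_port above).

```ini
[common]
server_addr = your.server.ip
server_port = 7000

[ssh]
type = tcp
local_ip = 127.0.0.1
local_port = 22
remote_port = 6000
```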

Install Gitea, Tomcat and Nginx
https://yingrenn.blogspot.com/2019/11/gitea-in-ubuntu.html
https://yingrenn.blogspot.com/search/label/tomcat

Install x11vnc
Ubuntu 18.04 switched from LightDM to GDM3, so x11vnc only works after a console login
sudo apt install x11vnc
sudo x11vnc -storepasswd
sudo chown mark:mark ~/.vnc/passwd
sudo vi /etc/systemd/system/x11vnc.service
# Description: Custom Service Unit file
# File: /etc/systemd/system/x11vnc.service
[Unit]
Description="x11vnc"
Requires=display-manager.service
After=display-manager.service

[Service]
ExecStart=/usr/bin/x11vnc -loop -nopw -xkb -repeat -noxrecord -noxfixes -noxdamage -forever -rfbport 5900 -display :1 -auth /run/user/1000/gdm/Xauthority -rfbauth /home/mark/.vnc/passwd
ExecStop=/usr/bin/killall x11vnc
Restart=on-failure
RestartSec=2

[Install]
WantedBy=multi-user.target

Install VirtualBox
https://www.virtualbox.org/wiki/Downloads
Download from the official site:
virtualbox-6.1_6.1.16-140961~Ubuntu~bionic_amd64.deb
Oracle_VM_VirtualBox_Extension_Pack-6.1.16.vbox-extpack
sudo dpkg -i virtualbox-6.1_6.1.16-140961~Ubuntu~bionic_amd64.deb
sudo groupadd win10disk
sudo usermod -a -G win10disk youruser
sudo udevadm info /dev/sdX | grep UUID
E: ID_PART_TABLE_UUID=01234567-89ab-cdef-0123-456789abcde
vi /etc/udev/rules.d/99-win10disk.rules
ENV{ID_PART_TABLE_UUID}=="01234567-89ab-cdef-0123-456789abcde", GROUP="win10disk"
ls -l /dev/sdb
brw-rw---- 1 root win10disk 8, 16 Nov 4 23:33 /dev/sdb
VBoxManage internalcommands createrawvmdk -filename .VirtualBox/Crucial1T.vmdk -rawdisk /dev/sda
sudo usermod -a -G vboxusers youruser
Copy over the old VM, then edit the uuid in win10.vbox and win10.vbox-prev
to match the uuid in .VirtualBox/Crucial1T.vmdk
Open VirtualBox
File/Preferences...
Under Extensions press +
and choose the Oracle_VM_VirtualBox_Extension_Pack-6.1.16.vbox-extpack downloaded earlier


If only the login screen cannot use the mouse and keyboard, and everything else works:
sudo apt-get install xserver-xorg-input-all

Monday, December 14, 2020

Record CSI-Camera to h264 video


gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080,format=NV12, framerate=30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvv4l2h264enc preset-level=4 MeasureEncoderLatency=1 ! 'video/x-h264, stream-format=(string)byte-stream, alignment=(string)au' ! filesink location=file.h264 -e 


gst-launch-1.0 filesrc location=file.h264 ! h264parse ! nvv4l2decoder ! nvvidconv ! nveglglessink -e


Wednesday, October 28, 2020

Using ONNX models with DeepStream

See: How to deploy ONNX models on NVIDIA Jetson Nano using DeepStream
From the ONNX model zoo, find the Tiny YOLOv2 model and download tinyyolov2-8.onnx (Opset version 8)

Tiny YOLOv2 overview
  Input shape: 1x3x416x416
  Output shape: 1x125x13x13
    The 125 channels hold 5 bounding boxes per grid cell, each with 25 32-bit float values
    The first 5 values describe the bounding box; the remaining 20 are class probabilities

$ git clone https://github.com/thatbrguy/Deep-Stream-ONNX.git
$ cd Deep-Stream-ONNX
$ mv ~/Downloads/tinyyolov2-8.onnx .
Download sample.tar.gz and extract it

$ vi config/config_infer_custom_yolo.txt
onnx-file=../tinyyolov2-8.onnx
parse-bbox-func-name=NvDsInferParseCustomYoloV2Tiny
custom-lib-path=../custom_bbox_parser/libnvdsinfer_custom_bbox_tiny_yolo.so

$ vi custom_bbox_parser/Makefile
SRCFILES:=nvdsparsebbox_tiny_yolo.cpp
TARGET_LIB:=libnvdsinfer_custom_bbox_tiny_yolo.so
DEEPSTREAM_PATH:=/opt/nvidia/deepstream/deepstream-5.0

$ vi custom_bbox_parser/nvdsparsebbox_tiny_yolo.cpp
    //assert (layer.dims.numDims == 3); // original line, changed to:
    assert (layer.inferDims.numDims == 3);

extern "C" bool NvDsInferParseCustomYoloV2Tiny(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList);
outputLayersInfo: the ONNX model's output layers; the raw data is in outputLayersInfo[0].buffer
networkInfo: information about the ONNX model
detectionParams: config parameters, e.g. numClassesConfigured
objectList: the output we must fill in

Test
$ deepstream-app -c config/deepstream_app_custom_yolo.txt

Monday, October 26, 2020

RTSP service on Jetson Nano

Download test-launch.c v1.18
Install the required packages
$ sudo apt-get install libgstrtspserver-1.0 libgstreamer1.0-dev
Compile
$ gcc test-launch.c -o test-launch $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)
If compilation fails with
undefined reference to `gst_rtsp_media_factory_set_enable_rtcp'
simply remove the related code

Test: start the service
$ ./test-launch "videotestsrc ! omxh265enc ! rtph265pay name=pay0 pt=96"
View the stream (Jetson platform)
$ gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! application/x-rtp, media=video ! queue ! decodebin ! nvvidconv ! nveglglessink
View the stream (generic platform)
$ gst-launch-1.0 rtspsrc location=rtsp://192.168.0.105:8554/test ! application/x-rtp, media=video ! queue ! decodebin ! nvvideoconvert ! nveglglessink
Using a CSI camera
$ ./test-launch "nvarguscamerasrc ! \
video/x-raw(memory:NVMM), format=NV12, width=640, height=480, framerate=30/1 ! \
nvvidconv ! video/x-raw ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"

Thursday, October 22, 2020

Face mask detection

See: face_mask_detection
See: Nvidia TLT (Transfer Learning Toolkit) for installing the required tools

Reference datasets
  with mask
  without mask
After downloading, arrange the files according to the directory layout

$ docker run --name face-mask --runtime=nvidia --gpus all -it -v /home/user/Data/DeepStreamer/tlt-experiments/:/workspace/tlt-experiments -p 8888:8888 nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3 /bin/bash
root@0aa10155a3bd:/workspace/tlt-experiments# exit
exit
$ docker ps
CONTAINER ID        IMAGE                                         COMMAND                  CREATED             STATUS              PORTS                    NAMES
0aa10155a3bd        nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3   "install_ngc_cli.sh …"   8 minutes ago       Up 6 minutes        0.0.0.0:8888->8888/tcp   face-mask
$ docker exec -it face-mask /bin/bash
# cd tlt-experiments/face-mask-detection
# python -m pip install -r requirements.txt
Edit face-mask-detection/tlt_specs/detectnet_v2_tfrecords_kitti_trainval.txt
root_directory_path: "/workspace/tlt-experiments/face-mask-detection-data/kitti/train"
Edit face-mask-detection/tlt_specs/detectnet_v2_train_resnet18_kitti.txt
tfrecords_path: "/workspace/tlt-experiments/face-mask-detection-data/kitti/tfrecords/kitti_trainval/*"
image_directory_path: "/workspace/tlt-experiments/face-mask-detection-data/kitti/train"
pretrained_model_file: "/workspace/tlt-experiments/detectnet_v2/pretrained_resnet18/tlt_pretrained_detectnet_v2_vresnet18/resnet18.hdf5"
Edit face-mask-detection/tlt_specs/detectnet_v2_retrain_resnet18_kitti.txt
tfrecords_path: "/workspace/tlt-experiments/face-mask-detection-data/kitti/tfrecords/kitti_trainval/*"
image_directory_path: "/workspace/tlt-experiments/face-mask-detection-data/kitti/train"
pretrained_model_file: "/workspace/tlt-experiments/face-mask-detection-data/detectnet_v2/experiment_dir_pruned/resnet18_nopool_bn_detectnet_v2_pruned.tlt"
Edit face-mask-detection/tlt_specs/detectnet_v2_inference_kitti_tlt.txt
model: "/workspace/tlt-experiments/detectnet_v2/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt"
Edit face-mask-detection/tlt_specs/detectnet_v2_inference_kitti_etlt.txt
trt_engine: "/workspace/tlt-experiments/detectnet_v2/experiment_dir_final_unpruned/resnet18_detector.trt"

Generate KITTI-format annotations
# vi data2kitti.sh
/usr/bin/python data2kitti.py --kaggle-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/Kaggle Medical Mask Dataset/medical-masks-dataset' --mafa-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/MAFA Dataset' --fddb-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/FDDB Dataset' --widerface-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/Wider Face Dataset' --kitti-base-path '/workspace/tlt-experiments/face-mask-detection-data/kitti' --train
/usr/bin/python data2kitti.py --kaggle-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/Kaggle Medical Mask Dataset/medical-masks-dataset' --mafa-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/MAFA Dataset' --fddb-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/FDDB Dataset' --widerface-dataset-path '/workspace/tlt-experiments/face-mask-detection-data/Wider Face Dataset' --kitti-base-path '/workspace/tlt-experiments/face-mask-detection-data/kitti' --val
# ./data2kitti.sh
# jupyter notebook --ip 0.0.0.0 --allow-root
    To access the notebook, open this file in a browser:
        file:///root/.local/share/jupyter/runtime/nbserver-241-open.html
    Or copy and paste one of these URLs:
        http://0aa10155a3bd:8888/?token=29cbb63890972635e00320fe6c66bda30b2c81daa7400f3d
     or http://127.0.0.1:8888/?token=29cbb63890972635e00320fe6c66bda30b2c81daa7400f3d
Open http://127.0.0.1:8888/?token=29cbb63890972635e00320fe6c66bda30b2c81daa7400f3d in a browser
then go to http://192.168.0.101:8888/notebooks/face-mask-detection.ipynb
Point the following environment variables at the correct directories
# Setting up env variables for cleaner command line commands.
%env KEY=tlt_encode
# User directory
%env USER_EXPERIMENT_DIR=/workspace/tlt-experiments/detectnet_v2
# Download directory
%env DATA_DOWNLOAD_DIR=/workspace/tlt-experiments/face-mask-detection-data/kitti
%env SPECS_DIR=/workspace/tlt-experiments/face-mask-detection/tlt_specs

Generate the tfrecords
!tlt-dataset-convert -d $SPECS_DIR/detectnet_v2_tfrecords_kitti_trainval.txt \
                     -o $DATA_DOWNLOAD_DIR/tfrecords/kitti_trainval/kitti_trainval

Query tlt_pretrained_detectnet_v2
!ngc registry model list nvidia/tlt_pretrained_detectnet_v2:*
!mkdir -p $USER_EXPERIMENT_DIR/pretrained_resnet18/
Download tlt_pretrained_detectnet_v2
!ngc registry model download-version nvidia/tlt_pretrained_detectnet_v2:resnet18 \
    --dest $USER_EXPERIMENT_DIR/pretrained_resnet18

Start training
!tlt-train detectnet_v2 -e $SPECS_DIR/detectnet_v2_train_resnet18_kitti.txt \
                        -r $USER_EXPERIMENT_DIR/experiment_dir_unpruned \
                        -k $KEY \
                        -n resnet18_detector

Evaluate the trained model
!tlt-evaluate detectnet_v2 -e $SPECS_DIR/detectnet_v2_train_resnet18_kitti.txt\
                           -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/resnet18_detector.tlt \
                           -k $KEY

Prune the model. -pth (threshold): larger values produce a smaller model and faster inference, at the cost of accuracy
!mkdir -p $USER_EXPERIMENT_DIR/experiment_dir_pruned
!tlt-prune -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/resnet18_detector.tlt \
           -o $USER_EXPERIMENT_DIR/experiment_dir_pruned/resnet18_nopool_bn_detectnet_v2_pruned.tlt \
           -eq union \
           -pth 0.7 \
           -k $KEY

Retrain the pruned model
!tlt-train detectnet_v2 -e $SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt \
                        -r $USER_EXPERIMENT_DIR/experiment_dir_retrain \
                        -k $KEY \
                        -n resnet18_detector_pruned

Evaluate the retrained model
!tlt-evaluate detectnet_v2 -e $SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt \
                           -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt \
                           -k $KEY

Run inference on the test images
!tlt-infer detectnet_v2 -e $SPECS_DIR/detectnet_v2_inference_kitti_tlt.txt \
                        -o $USER_EXPERIMENT_DIR/tlt_infer_testing \
                        -i $DATA_DOWNLOAD_DIR/test/images \
                        -k $KEY

Export a model usable by DeepStream
!mkdir -p $USER_EXPERIMENT_DIR/experiment_dir_final
# Removing a pre-existing copy of the etlt if there has been any.
import os
output_file=os.path.join(os.environ['USER_EXPERIMENT_DIR'],
                         "experiment_dir_final/resnet18_detector_thermal.etlt")
if os.path.exists(output_file):
    os.system("rm {}".format(output_file))
!tlt-export detectnet_v2 \
            -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt \
            -o $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector_thermal.etlt \
            -k $KEY

Edit ds_configs/config_infer_primary_masknet_gpu.txt
tlt-encoded-model=/home/nano/Data/DeepStream/face-mask-detection/experiment_dir_final/resnet18_detector_thermal.etlt
# do not use model-engine-file
input-dims=3;480;640;0

Edit ds_configs/deepstream_app_source1_camera_masknet_gpu.txt
width=640
height=480

Done
$ deepstream-app -c deepstream_app_source1_camera_masknet_gpu.txt


Tuesday, October 20, 2020

Nvidia TLT (Transfer Learning Toolkit)


Transfer learning reduces training time, and also the amount of training data and annotation required
TLT is docker-based; see the Docker installation and usage notes
TLT ships pretrained models, downloaded through NGC; see NVIDIA GPU Cloud (NGC)
TLT can run Jupyter notebooks inside docker
TLT outputs models in .etlt format, usable from DeepStream and TensorRT
Pruning removes the less important nodes from the model, reducing memory use and speeding up inference
Dataset augmentation: color shifts, blur, rotation, scaling, flips, crops

TrafficCamNet and DashCamNet both target traffic scenes
TrafficCamNet is meant for fixed viewpoints, DashCamNet for a moving vehicle

PeopleNet detects people, bags, and faces
FaceDetectIR focuses on faces

Transfer Learning Toolkit workflow
Download a pretrained model
Prepare the training and test datasets
Train the model
Evaluate the model
Prune the model
Export the model

Installation steps
Check that the NVIDIA GPU driver is v410.xx or later, using
$ nvidia-smi
Install NVIDIA Docker; see nvidia-docker and Installing Docker CE on Ubuntu
$ curl https://get.docker.com | sh
Start the docker service and enable it at boot
$ sudo systemctl start docker && sudo systemctl enable docker
Add the apt key and repository
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
$ sudo systemctl restart docker
Verify the installation
$ sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
See NVIDIA GPU Cloud (NGC) to obtain an API Key
$ docker login nvcr.io
Username: $oauthtoken
Password: YOUR_NGC_API_KEY
The current TLT version is v2.0_py3
Pull TLT
$ docker pull nvcr.io/nvidia/tlt-streamanalytics:<version>
Run TLT
$ docker run --runtime=nvidia -it nvcr.io/nvidia/tlt-streamanalytics:<version> /bin/bash
Map a directory to access local data
$ docker run --runtime=nvidia -it -v /home/<username>/tlt-experiments:/workspace/tlt-experiments nvcr.io/nvidia/tlt-streamanalytics:<version> /bin/bash
Also map a network port
$ docker run --runtime=nvidia -it -v /home/<username>/tlt-experiments:/workspace/tlt-experiments -p 8888:8888 nvcr.io/nvidia/tlt-streamanalytics:<version>
Run jupyter notebook
$ cd examples/
$ jupyter notebook --ip 0.0.0.0 --allow-root

Configure the NGC API Key
$ ngc config set
List the models on NGC
$ ngc registry model list <model_glob_string>
For example:
$ ngc registry model list nvidia/tlt_pretrained_*
List the available versions of tlt_pretrained_detectnet_v2
$ ngc registry model list nvidia/tlt_pretrained_detectnet_v2:*
Download a model
$ ngc registry model download-version <ORG/model_name:version> --dest <path_to_download_dir>
For example:
$ ngc registry model download-version nvidia/tlt_pretrained_classification:resnet18 --dest $USER_EXPERIMENT_DIR/pretrained_resnet18

Object detection label files use the KITTI format,
but only the class names and bounding boxes are used



Friday, October 16, 2020

DeepStream performance measurement

1. Read the timestamps directly
gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, GstPadProbeCallback, NULL, NULL);

static GstPadProbeReturn GstPadProbeCallback(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    GstBuffer *buf = (GstBuffer *) info->data;
    if (GST_BUFFER_DTS_IS_VALID(buf)) {
        GstClockTime dts = GST_BUFFER_DTS(buf);
        g_print("dts=%" G_GUINT64_FORMAT "ms ", dts/1000000);
    }
    if (GST_BUFFER_PTS_IS_VALID(buf)) {
        GstClockTime pts = GST_BUFFER_PTS(buf);
        g_print("pts=%" G_GUINT64_FORMAT "ms ", pts/1000000);
    }
    if (GST_BUFFER_DURATION_IS_VALID(buf)) {
        GstClockTime duration = GST_BUFFER_DURATION(buf);
        g_print("duration=%" G_GUINT64_FORMAT "ms ", duration/1000000);
    }
    g_print("\n");
    return GST_PAD_PROBE_OK;  /* the probe must return a GstPadProbeReturn */
}

GStreamer's textoverlay & timeoverlay

Display text with textoverlay
$ gst-launch-1.0 -v videotestsrc ! textoverlay text="Room A" valignment=top halignment=left font-desc="Sans, 72" ! nvvideoconvert ! nveglglessink

Display subtitles with textoverlay and the running time with timeoverlay
Subtitle file
$ cat subtitles.srt 
 1
 00:00:03,000 --> 00:00:05,000
 Hello? (3-5s)

 2
 00:00:08,000 --> 00:00:13,000
 Yes, this is a subtitle. Don't
 you like it? (8-13s)

 3
 00:00:18,826 --> 00:01:02,886
 Uh? What are you talking about?
 I don't understand  (18-62s)

$ gst-launch-1.0 -v filesrc location=subtitles.srt ! subparse ! txt.   videotestsrc ! timeoverlay font-desc="Sans, 24" ! textoverlay name=txt shaded-background=yes font-desc="Sans, 24" ! nvvideoconvert ! nveglglessink

Thursday, October 15, 2020

DeepStream's nvdsosd

nvosd = gst_element_factory_make ("nvdsosd", "nv-onscreendisplay");

Disable object bounding boxes
g_object_set (G_OBJECT (nvosd), "display-bbox", 0, NULL);

Disable object description text
g_object_set (G_OBJECT (nvosd), "display-text", 0, NULL);

Show a clock in the top-left corner
g_object_set (G_OBJECT (nvosd), "display-clock", 1, NULL);

osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
    GstPadProbeCallback, (gpointer)sink, NULL);
gst_object_unref (osd_sink_pad);

static GstPadProbeReturn GstPadProbeCallback(
  GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next)
  {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next)
    {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) (l_obj->data);
      // NvDsObjectMeta is declared in sources/includes/nvdsmeta.h
      // class_id: Holds the index of the object class inferred by the primary detector/classifier.
      // object_id: Holds a unique ID for tracking the object.
      //   UNTRACKED_OBJECT_ID indicates that the object has not been tracked.
      // rect_params: the object's bounding box, possibly produced by the tracker
      // text_params: the object description "primary-class track class1 class2..."
      // obj_label: the primary class label
      for (GList *l = obj_meta->classifier_meta_list; l != NULL; l = l->next)
      {
        NvDsClassifierMeta *classifierMeta = (NvDsClassifierMeta *) (l->data);
        // NvDsClassifierMeta is declared in sources/includes/nvdsmeta.h
        // unique_component_id: id of the classifier component
        NvDsLabelInfo *pLabel = get_possible_label(classifierMeta);
      }
    }
  }
  return GST_PAD_PROBE_OK;
}

NvDsLabelInfo *get_possible_label (NvDsClassifierMeta * classifierMeta)
{
  NvDsLabelInfo *pLabel = NULL;
  for (GList *n = classifierMeta->label_info_list; n != NULL; n = n->next) {
    NvDsLabelInfo *labelInfo = (NvDsLabelInfo *) (n->data);
    // NvDsLabelInfo is declared in sources/includes/nvdsmeta.h
    // result_class_id: the best class id
    // result_prob: its probability
    // result_label: the class name
    // pResult_label: used instead when result_label is too long to fit
    if (pLabel == NULL) {
        pLabel = labelInfo;
    } else if (labelInfo->result_prob > pLabel->result_prob) {
        pLabel = labelInfo;
    }
  }
  return pLabel;
}

Wednesday, October 14, 2020

deepstream-test4-app study notes

See: deepstream-test5 study notes

Start zookeeper
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/zookeeper-server-start.sh config/zookeeper.properties

Start kafka
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-server-start.sh config/server.properties

Receive the triggered events
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic ObjDetect

$ ./deepstream-test4-app -i /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 -p /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so --conn-str="localhost;9092;ObjDetect"


Tuesday, October 13, 2020

GStreamer study notes: gst-launch-1.0

Play video and audio simultaneously with gst-launch-1.0

$ gst-launch-1.0 filesrc location="/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4" ! qtdemux name=demux demux.audio_0 ! queue ! avdec_aac ! audioconvert ! alsasink demux.video_0 ! queue ! h264parse ! omxh264dec ! nveglglessink

Using decodebin
$ gst-launch-1.0 filesrc location="/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4" ! decodebin name=d d. ! nvvidconv ! nveglglessink d. ! queue ! audioconvert ! audioresample ! alsasink

$ gst-launch-1.0 filesrc location="/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4" ! decodebin name=d d.src_0 ! nvvidconv ! nveglglessink d.src_1 ! queue ! audioconvert ! audioresample ! alsasink

Using uridecodebin
nano@nano-desktop:~$ gst-launch-1.0 uridecodebin uri="file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4" name=d d. ! nvvidconv ! nveglglessink d. ! queue ! audioconvert ! audioresample ! alsasink

Play the video only
$ gst-launch-1.0 filesrc location="/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4" ! decodebin name=d d.src_0 ! nvvidconv ! nveglglessink

Wednesday, October 7, 2020

deepstream-test5 study notes

See: kafka study notes (operation)
See: GStreamer
See: NVIDIA DeepStream SDK Developer Guide

Start zookeeper
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/zookeeper-server-start.sh config/zookeeper.properties

Start kafka
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-server-start.sh config/server.properties

$ vi configs/test5_config_file_src_infer.txt
[sourceX]
# type 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CAMERA_CSI
type=4
uri=rtsp://user:passwd@192.168.0.xxx:554/chID=1&streamType=main&linkType=tcp
smart-record=1
# recording start/stop can be triggered with the command below
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic Record
# >{"command":"start-recording","sensor":{"id":"0"}}
# >{"command":"stop-recording","sensor":{"id":"0"}}
smart-record=2
# triggers start / stop every 10 seconds
[sink1]
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
msg-broker-conn-str=localhost;9092
topic=ObjDetect
[message-consumer0]
subscribe-topic-list=Record

Useful debug-message macros, declared in
deepstream-5.0/sources/apps/apps-common/includes/deepstream_common.h
NVGSTDS_ERR_MSG_V
NVGSTDS_INFO_MSG_V
NVGSTDS_WARN_MSG_V
The debug printing from the GStreamer study notes can also be used,
as can that from the GLib Reference Manual

Run the command
nano@nano-desktop:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test5a$ ./deepstream-test5-app -c configs/test5_config_file_src_infer.txt 
Keyboard commands are available in the terminal running the app:
h: Print this help
q: Quit
p: Pause
r: Resume

Allow the recording command to omit the 'start' time field
vi deepstream-5.0/sources/apps/apps-common/src/deepstream_c2d_msg_util.c
NvDsC2DMsg*
nvds_c2d_parse_cloud_message (gpointer data, guint size)
{
...
    if (startRec) {
      time_t startUtc, endUtc, curUtc;
      const gchar *timeStr;
      if (json_object_has_member (object, "start")) {
...
      } else {
        start = 0;
        NVGSTDS_WARN_MSG_V ("wrong message format, missing 'start' field.");
        //goto error;
      }
...
}

Receive the triggered events
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic ObjDetect --from-beginning

Send the start/stop recording commands
nano@nano-desktop:~/Data/DeepStream/kafka_2.13-2.6.0$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic Record
>{"command":"start-recording","sensor":{"id":"0"}}
>{"command":"stop-recording","sensor":{"id":"0"}}

vi deepstream-5.0/sources/apps/apps-common/src/deepstream_source_bin.c
create_camera_source_bin
create_uridecode_src_bin
create_rtsp_src_bin


Monday, October 5, 2020

GStreamer study notes

playbin automatically uses autoaudiosink and autovideosink
The code below substitutes a sink of your own choosing

  pipeline =
      gst_parse_launch(
      "playbin uri=file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4",
      NULL);
  GstElement *sink = gst_element_factory_make ("nveglglessink", "sink");
  g_object_set(G_OBJECT(pipeline), "video-sink", sink, NULL);


Alternatively, adjust the registry rank
    GstRegistry *plugins_register = gst_registry_get();
    GstPluginFeature *nveglglessink = gst_registry_lookup_feature(plugins_register, "nveglglessink");
    if (nveglglessink == NULL) {
      g_printerr("gst_registry_lookup_feature return NULL \n");
      return 0;
    }
    guint nveglglessink_rank = gst_plugin_feature_get_rank(nveglglessink);
    g_print("nveglglessink rank=%u\n", nveglglessink_rank);
    gst_plugin_feature_set_rank(nveglglessink, GST_RANK_PRIMARY + 11);
    gst_object_unref(nveglglessink);

Using gst-launch-1.0
$ gst-launch-1.0 playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm video-sink=nveglglessink

Tuesday, September 29, 2020

GStreamer study notes: debugging

Printing debug messages
GST_ERROR()
GST_WARNING()
GST_INFO()
GST_DEBUG()
GST_LOG()

Controlled by the GST_DEBUG environment variable
0: none, nothing is printed
1: ERROR
2: WARNING
3: FIXME
4: INFO
5: DEBUG
6: LOG

Generating a pipeline graph
Set the environment variable GST_DEBUG_DUMP_DOT_DIR=file_path
In the code, after gst_bus_timed_pop_filtered, add
GST_DEBUG_BIN_TO_DOT_FILE(GST_BIN(pipeline), GST_DEBUG_GRAPH_SHOW_MEDIA_TYPE, "file_name");
Running the program produces file_path/file_name.dot
Convert it to a png with
$ dot -Tpng -o file_name.png file_name.dot