
Showing posts with the TensorRT label. Show all posts

Wednesday, October 11, 2023

Install Ubuntu 20.04

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install ssh

$ sudo vi /etc/fstab
# use tabs (not spaces) between the fields
ip:/share_folder /mnt/mount_folder nfs defaults,bg 0 0
$ cd /mnt
$ sudo mkdir QNAP_A QNAP_B
$ sudo mount -a

$ mkdir -p ~/.config/autostart
$ cp /usr/share/applications/vino-server.desktop ~/.config/autostart/
$ gsettings set org.gnome.Vino prompt-enabled false
$ gsettings set org.gnome.Vino require-encryption false
$ gsettings set org.gnome.Vino authentication-methods "['vnc']"
$ gsettings set org.gnome.Vino vnc-password $(echo -n 'ChangeToYourPasswd'|base64)
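Vino stores the VNC password base64-encoded, which is what the `echo -n ... | base64` pipeline above produces. A minimal Python equivalent, using the same placeholder password:

```python
import base64

# 'ChangeToYourPasswd' is the placeholder from the gsettings command above;
# Vino expects the base64 encoding of the raw password bytes.
encoded = base64.b64encode(b'ChangeToYourPasswd').decode('ascii')
print(encoded)
```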
$ sudo vi /etc/gdm3/custom.conf
WaylandEnable=false
AutomaticLoginEnable = true
AutomaticLogin = UserLoginName
$ vi vino.sh
#!/bin/bash
DISP=`ps -u $(id -u) -o pid= | \
    while read pid; do
        cat /proc/$pid/environ 2>/dev/null | tr '\0' '\n' | grep '^DISPLAY=:'
    done | grep -o ':[0-9]*' | sort -u`
echo $DISP
/usr/lib/vino/vino-server --display=$DISP
$ chmod +x vino.sh
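vino.sh scans each process's `/proc/<pid>/environ` (NUL-separated `KEY=value` entries) for a `DISPLAY=:<n>` variable. A small Python sketch of the same parsing, run here on a fabricated environ blob rather than a live process:

```python
import re

def find_displays(environ_blob: bytes) -> set:
    """Extract X display names (':0', ':1', ...) from a /proc/<pid>/environ blob.

    The blob is a sequence of NUL-separated KEY=value entries, exactly what
    the `tr '\\0' '\\n' | grep '^DISPLAY=:'` pipeline in vino.sh operates on.
    """
    displays = set()
    for entry in environ_blob.split(b'\0'):
        m = re.match(rb'DISPLAY=(:[0-9]+)', entry)
        if m:
            displays.add(m.group(1).decode())
    return displays

# Fabricated example blob for illustration:
blob = b'HOME=/home/user\0DISPLAY=:0\0LANG=C.UTF-8\0'
print(find_displays(blob))  # {':0'}
```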

Install the latest driver version, according to:
CUDA Toolkit and Corresponding Driver Versions
https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html
dGPU Setup for Ubuntu
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html
Ubuntu 20.04
GStreamer 1.16.3
NVIDIA driver 525.125.06
CUDA 12.1
TensorRT 8.5.3.1

$ sudo ubuntu-drivers devices
$ sudo apt-get install nvidia-driver-535
$ sudo reboot
$ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.1-1_all.deb
$ sudo dpkg -i cuda-keyring_1.1-1_all.deb
$ sudo apt-get update
$ sudo apt-get -y install cuda-12-2
$ sudo apt-get -y install cuda-12-1

Install cuDNN
Reference: https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html
Go to 2.2. Downloading cuDNN for Linux (https://developer.nvidia.com/cudnn)
Download the Local Install for Ubuntu20.04 x86_64 (Deb)
$ sudo apt-get install zlib1g
$ sudo dpkg -i cudnn-local-repo-ubuntu2004-8.9.5.29_1.0-1_amd64.deb
$ sudo cp /var/cudnn-local-repo-ubuntu2004-8.9.5.29/cudnn-local-98C06E99-keyring.gpg /usr/share/keyrings/
$ sudo apt-get update
$ apt list -a libcudnn8
$ sudo apt-get install libcudnn8=8.9.5.29-1+cuda12.2
$ sudo apt-get install libcudnn8-dev=8.9.5.29-1+cuda12.2
$ sudo apt-get install libcudnn8-samples=8.9.5.29-1+cuda12.2
$ update-alternatives --display libcudnn
$ cp -r /usr/src/cudnn_samples_v8/ .
$ cd cudnn_samples_v8/mnistCUDNN/
$ sudo apt-get install libfreeimage3 libfreeimage-dev
$ make clean && make
$ ./mnistCUDNN
...
Test passed!

Install TensorRT 8.6.1
https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-861/install-guide/index.html
$ sudo apt-get install python3-pip
$ sudo apt-get install python3.8-venv
$ python3 -m venv envs/tensorrt
$ source envs/tensorrt/bin/activate
$ pip3 install --upgrade pip
$ python3 -m pip install --extra-index-url https://pypi.nvidia.com tensorrt_libs
$ python3 -m pip install --extra-index-url https://pypi.nvidia.com tensorrt_bindings
$ python3 -m pip install --upgrade tensorrt
$ python3 -m pip install --upgrade tensorrt_lean
$ python3 -m pip install --upgrade tensorrt_dispatch
Test TensorRT from Python
$ python3
>>> import tensorrt
>>> print(tensorrt.__version__)
>>> assert tensorrt.Builder(tensorrt.Logger())
>>> import tensorrt_lean as trt
>>> print(trt.__version__)
>>> assert trt.Builder(trt.Logger())
>>> import tensorrt_dispatch as trt
>>> print(trt.__version__)
>>> assert trt.Builder(trt.Logger())

Open https://developer.nvidia.com/tensorrt and click GET STARTED
Open https://developer.nvidia.com/tensorrt-getting-started and click DOWNLOAD NOW
Select TensorRT 8
Select TensorRT 8.6 GA
TensorRT 8.6 GA for Ubuntu 20.04 and CUDA 12.0 and 12.1 DEB local repo Package
$ sudo dpkg -i nv-tensorrt-local-repo-ubuntu2004-8.6.1-cuda-12.0_1.0-1_amd64.deb
$ sudo cp /var/nv-tensorrt-local-repo-ubuntu2004-8.6.1-cuda-12.0/nv-tensorrt-local-9A1EDFBA-keyring.gpg /usr/share/keyrings/
$ sudo apt-get update
$ sudo apt-get install tensorrt
$ sudo apt-get install libnvinfer-lean8
$ sudo apt-get install libnvinfer-vc-plugin8
$ sudo apt-get install python3-libnvinfer-lean
$ sudo apt-get install python3-libnvinfer-dispatch
$ python3 -m pip install numpy
$ sudo apt-get install python3-libnvinfer-dev
$ python3 -m pip install protobuf
$ sudo apt-get install uff-converter-tf
$ python3 -m pip install numpy onnx
$ sudo apt-get install onnx-graphsurgeon
Verify the installation
$ dpkg-query -W tensorrt
tensorrt        8.6.1.6-1+cuda12.0
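`dpkg-query -W` prints `package<TAB>version`; if you want to script this check, a sketch that parses the output line shown above (the literal output is used as sample input):

```python
def parse_dpkg_version(line: str):
    """Split a `dpkg-query -W` output line into (package, version)."""
    package, version = line.split()
    return package, version

pkg, ver = parse_dpkg_version('tensorrt        8.6.1.6-1+cuda12.0')
# Upstream version is the part before the Debian revision:
upstream = ver.split('-')[0]
print(pkg, upstream)  # tensorrt 8.6.1.6
```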

Install DeepStream
$ sudo apt-get install libssl1.1
$ sudo apt-get install libgstreamer1.0-0
$ sudo apt-get install gstreamer1.0-tools
$ sudo apt-get install gstreamer1.0-plugins-good
$ sudo apt-get install gstreamer1.0-plugins-bad
$ sudo apt-get install gstreamer1.0-plugins-ugly
$ sudo apt-get install gstreamer1.0-libav
$ sudo apt-get install libgstreamer-plugins-base1.0-dev
$ sudo apt-get install libgstrtspserver-1.0-0
$ sudo apt-get install libjansson4
$ sudo apt-get install libyaml-cpp-dev
$ sudo apt-get install libjsoncpp-dev
$ sudo apt-get install protobuf-compiler
$ sudo apt-get install gcc
$ sudo apt-get install make
$ sudo apt-get install git
$ sudo apt-get install python3

$ git clone https://github.com/edenhill/librdkafka.git
$ cd librdkafka
$ git reset --hard 7101c2310341ab3f4675fc565f64f0967e135a6a
$ ./configure
$ make
$ sudo make install
$ sudo mkdir -p /opt/nvidia/deepstream/deepstream-6.3/lib
$ sudo cp /usr/local/lib/librdkafka* /opt/nvidia/deepstream/deepstream-6.3/lib

https://catalog.ngc.nvidia.com/orgs/nvidia/resources/deepstream
Download deepstream-6.3_6.3.0-1_amd64.deb
$ wget --content-disposition 'https://api.ngc.nvidia.com/v2/resources/nvidia/deepstream/versions/6.3/files/deepstream-6.3_6.3.0-1_amd64.deb'
$ sudo apt-get install ./deepstream-6.3_6.3.0-1_amd64.deb
$ cd /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app
$ deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt 

Install Docker
https://docs.docker.com/engine/install/ubuntu/
$ sudo apt-get update
$ sudo apt-get install ca-certificates curl gnupg
$ sudo install -m 0755 -d /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$ sudo chmod a+r /etc/apt/keyrings/docker.gpg
$ echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
$ sudo docker run --rm hello-world

Install NVIDIA Container Toolkit
Reference: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
$ curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list \
  && \
    sudo apt-get update
$ sudo apt-get install -y nvidia-container-toolkit
$ sudo nvidia-ctk runtime configure --runtime=docker
$ sudo systemctl restart docker
$ sudo groupadd docker
$ sudo usermod -a -G docker $USER
$ docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

Install NGC CLI
Reference: https://ngc.nvidia.com/setup/installers/cli
$ wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/ngc-apps/ngc_cli/versions/3.30.1/files/ngccli_linux.zip -O ngccli_linux.zip && unzip ngccli_linux.zip
$ find ngc-cli/ -type f -exec md5sum {} + | LC_ALL=C sort | md5sum -c ngc-cli.md5
$ sha256sum ngccli_linux.zip
$ chmod u+x ngc-cli/ngc
$ echo "export PATH=\"\$PATH:$(pwd)/ngc-cli\"" >> ~/.bash_profile && source ~/.bash_profile
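The `md5sum -c` step above verifies the extracted CLI files. The same digest can be computed from Python's standard library if `md5sum` is unavailable; the scratch file below is purely illustrative:

```python
import hashlib
import tempfile

def file_md5(path, chunk_size=1 << 20):
    """Hex MD5 digest of a file, read in chunks to keep memory flat."""
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

# Illustrative usage on a scratch file:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b'hello')
    tmp_path = f.name
print(file_md5(tmp_path))  # 5d41402abc4b2a76b9719d911017c592
```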
$ ngc config set
# just press Enter at each prompt
$ docker login nvcr.io
Username: $oauthtoken
Password: <Your API Key>

Develop DeepStream 6.3 with Docker
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_docker_containers.html
$ sudo docker pull nvcr.io/nvidia/deepstream:6.3-gc-triton-devel
$ export DISPLAY=:0
$ xhost +
$ docker run -it --rm --net=host --gpus all -e DISPLAY=$DISPLAY --device /dev/snd -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream:6.3-gc-triton-devel
# cd samples/configs/deepstream-app
# deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt 
# exit
$ sudo docker ps -a
$ sudo docker stop container_id
$ sudo docker rm container_id
$ sudo docker image list
$ sudo docker image rm image_id

Monday, July 3, 2023

YOLOv8 and TensorRT

Reference: the official YOLOv8 GitHub repository

1. Download DeepStream-Yolo and Ultralytics YOLOv8
git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
git clone https://github.com/ultralytics/ultralytics.git /mnt/Data/DeepStream/DeepStream-Yolo/ultralytics

2. Create the deepstream_yolo Docker container
docker_run.sh
xhost +
docker run --name='deepstream_yolo' --gpus all -it --net=host --privileged \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v /etc/localtime:/etc/localtime \
  -v /mnt/Data/DeepStream/DeepStream-Yolo/DeepStream-Yolo:/home/DeepStream-Yolo \
  -v /mnt/Data/DeepStream/DeepStream-Yolo/ultralytics:/home/ultralytics \
  -v /mnt/Data/DeepStream/DeepStream-Yolo/read_me:/home/read_me \
  -v /mnt/Data/DeepStream/DeepStream-Yolo/datasets:/home/datasets \
  -v /mnt/CT1000SSD/ImageData/Light:/home/Light \
  -e DISPLAY=$DISPLAY \
  -w /home/read_me \
  nvcr.io/nvidia/deepstream:6.2-devel
  
3. Inside the container, build DeepStream-Yolo
apt-get install build-essential
/opt/nvidia/deepstream/deepstream/user_additional_install.sh
cd /home/DeepStream-Yolo
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo

4. Inside the container, install Ultralytics YOLOv8
#python3 -m pip install --upgrade pip
pip3 install --upgrade pip
pip3 install protobuf numpy
cd /home/ultralytics
#pip install -e .
pip3 install -r requirements.txt
python3 setup.py install
pip3 install onnx onnxsim onnxruntime

5. Inside the container, download, convert, and test the yolov8s.pt and yolov8n.pt models
cd /home/ultralytics
cp /home/DeepStream-Yolo/utils/export_yoloV8.py /home/ultralytics
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt
python3 export_yoloV8.py -w yolov8s.pt --dynamic
python3 export_yoloV8.py -w yolov8n.pt --dynamic
cp yolov8s.onnx labels.txt /home/DeepStream-Yolo
cp yolov8n.onnx labels.txt /home/DeepStream-Yolo

6. Remove the deepstream_yolo container
$ docker container rm deepstream_yolo

7. Re-enter the container
docker_attach.sh
xhost +
docker start deepstream_yolo
docker attach deepstream_yolo

8. Convert the model to ONNX format
yolov8n.py
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.yaml")  # build a new model from scratch
model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

# Use the model
model.train(data="coco128.yaml", epochs=3)  # train the model
metrics = model.val()  # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
path = model.export(format="onnx")  # export the model to ONNX format

Running python3 yolov8n.py raised the following error:
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
Fix:
$ sudo systemctl stop docker
Get the container ID:
$ docker inspect deepstream_yolo | grep Id
"Id": "???????"
Edit the container's ShmSize:
$ sudo vi /var/lib/docker/containers/your_container_id/hostconfig.json
"ShmSize":8589934592
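The `ShmSize` value in hostconfig.json is in bytes; 8589934592 is exactly 8 GiB. Quick arithmetic check:

```python
# ShmSize in hostconfig.json is given in bytes; 8 GiB:
shm_bytes = 8 * 1024 ** 3
print(shm_bytes)  # 8589934592
```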
$ sudo systemctl restart docker
$ ./docker_attach.sh

9. Test the ONNX model in DeepStream
# cd /home/DeepStream-Yolo

# vi config_infer_primary_yoloV8.txt
onnx-file=yolov8s.onnx
onnx-file=yolov8n.onnx

# vi deepstream_app_config.txt
uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
uri=rtsp://root:A1234567@192.168.0.107:554/live1s1.sdp
live-source=0
live-source=1
config-file=config_infer_primary.txt
config-file=config_infer_primary_yoloV8.txt
file-loop=0
file-loop=1

# deepstream-app -c deepstream_app_config.txt

10. Prepare your own image data: convert PASCAL VOC annotations (XML produced by LabelImg) to YOLO txt format
prepare_detect.py
import cv2
import os
import random
import re
import xml.etree.ElementTree as ET

import numpy as np

LIGHT_CLASSES_LIST = [
    'forward_right',
    'others',
    'red',
    'red_left',
    'yellow',
    ]
        
def save_false_positives(img_org, iName, xName, tag, classIdx, 
        clip_x0, clip_y0, clip_x1, clip_y1):
    img_new = img_org[clip_y0:clip_y1, clip_x0:clip_x1]
    fPath, fName = os.path.split(iName)
    fName, fExt = os.path.splitext(fName)
    fName = fName + tag + fExt
    rndPaths = ['train', 'val', 'test']
    rndPath = random.choices(rndPaths, weights=(8,1,1))[0]
    iName = os.path.join('/home/datasets/Light/images', rndPath, fName)
    cv2.imwrite(iName, img_new)
        
def convert_box(size, box):
    dw, dh = 1. / size[0], 1. / size[1]
    x, y, w, h = (box[0] + box[1]) / 2.0 - 1, (box[2] + box[3]) / 2.0 - 1, box[1] - box[0], box[3] - box[2]
    return x * dw, y * dh, w * dw, h * dh
          
def save_file(img_org, iName, xName, tag, classIdx, 
        p0x, p0y, p1x, p1y, p2x, p2y, p3x, p3y, 
        img_w, img_h, xmin, ymin, xmax, ymax,
        clip_x0, clip_y0, clip_x1, clip_y1):
    img_new = img_org[clip_y0:clip_y1, clip_x0:clip_x1]
    fPath, fName = os.path.split(iName)
    fName, fExt = os.path.splitext(fName)
    fName = fName + tag + fExt
    rndPaths = ['train', 'val', 'test']
    rndPath = random.choices(rndPaths, weights=(8,1,1))[0]
    iName = os.path.join('/home/datasets/Light/images', rndPath, fName)
    cv2.imwrite(iName, img_new)
    
    w = clip_x1 - clip_x0
    h = clip_y1 - clip_y0
    xmin = xmin - clip_x0
    ymin = ymin - clip_y0
    xmax = xmax - clip_x0
    ymax = ymax - clip_y0
    bb = convert_box((w, h), (xmin, xmax, ymin, ymax))
    fPath, fName = os.path.split(xName)
    fName, fExt = os.path.splitext(fName)
    fName = fName + tag + '.txt'
    tName = os.path.join('/home/datasets/Light/labels', rndPath, fName)
    with open(tName, 'w') as f:
        f.write(" ".join([str(a) for a in (classIdx, *bb)]) + '\n')
        
def gen_img_yolo(iName, xName):
    tree = ET.parse(open(xName))
    root = tree.getroot()
    img_w = int(root.find('size').find('width').text)
    img_h = int(root.find('size').find('height').text)
    for idx, object in enumerate(root.findall('object')):
        name = object.find('name').text
        classIdx = LIGHT_CLASSES_LIST.index(name)
        #print(classIdx, name)
        bndbox = object.find('bndbox')
        p0x = int(bndbox.find('p0x').text)
        p0y = int(bndbox.find('p0y').text)
        p1x = int(bndbox.find('p1x').text)
        p1y = int(bndbox.find('p1y').text)
        p2x = int(bndbox.find('p2x').text)
        p2y = int(bndbox.find('p2y').text)
        p3x = int(bndbox.find('p3x').text)
        p3y = int(bndbox.find('p3y').text)
        xmin = int(bndbox.find('xmin').text)
        ymin = int(bndbox.find('ymin').text)
        xmax = int(bndbox.find('xmax').text)
        ymax = int(bndbox.find('ymax').text)
        if xmin != p0x or xmin != p3x or ymin != p0y or ymin != p1y or \
                xmax != p1x or xmax != p2x or ymax != p2y or ymax != p3y:
            print('error:bndbox', xName)
            exit()
        if idx > 0:
            print('error:object', xName)
            exit()
    img_org = cv2.imread(iName)
    if img_org.shape[0] != img_h or img_org.shape[1] != img_w:
        print(img_org.shape, (img_h, img_w))
        exit()
    img = np.copy(img_org)

    clip_x0 = random.randrange(0, int(xmin*0.5))
    clip_y0 = random.randrange(0, int(ymin*0.5))
    clip_x1 = random.randrange(int(xmax + (img_w-xmax)*0.5), img_w+1)
    clip_y1 = random.randrange(int(ymax + (img_h-ymax)*0.5), img_h+1)
    save_file(img_org, iName, xName, '', classIdx, 
            p0x, p0y, p1x, p1y, p2x, p2y, p3x, p3y, 
            img_w, img_h, xmin, ymin, xmax, ymax,
            clip_x0, clip_y0, clip_x1, clip_y1)
    ratio = (xmax - xmin) / img_w
    if ratio < 0.3:
        clip_x0 = random.randrange(int(xmin*0.3), int(xmin*0.8))
        clip_y0 = random.randrange(int(ymin*0.3), int(ymin*0.8))
        clip_x1 = random.randrange(int(xmax + (img_w-xmax)*0.2), int(xmax + (img_w-xmax)*0.7))
        clip_y1 = random.randrange(int(ymax + (img_h-ymax)*0.2), int(ymax + (img_h-ymax)*0.7))
        save_file(img_org, iName, xName, '_a', classIdx, 
                p0x, p0y, p1x, p1y, p2x, p2y, p3x, p3y, 
                img_w, img_h, xmin, ymin, xmax, ymax,
                clip_x0, clip_y0, clip_x1, clip_y1)
        clip_x0 = random.randrange(int(xmin*0.5), int(xmin*0.9))
        clip_y0 = random.randrange(int(ymin*0.5), int(ymin*0.9))
        clip_x1 = random.randrange(int(xmax + (img_w-xmax)*0.1), int(xmax + (img_w-xmax)*0.5))
        clip_y1 = random.randrange(int(ymax + (img_h-ymax)*0.1), int(ymax + (img_h-ymax)*0.5))
        save_file(img_org, iName, xName, '_b', classIdx, 
                p0x, p0y, p1x, p1y, p2x, p2y, p3x, p3y, 
                img_w, img_h, xmin, ymin, xmax, ymax,
                clip_x0, clip_y0, clip_x1, clip_y1)
        if xmin > (img_w - xmax):
            if ymin > (img_h - ymax):
                clip_x0 = random.randrange(0, int(xmin*0.8))
                clip_y0 = random.randrange(0, int(ymin*0.8))
                clip_x1 = random.randrange(int(xmin), int(xmin+(xmax-xmin)*0.8))
                clip_y1 = random.randrange(int(ymin), int(ymin+(ymax-ymin)*0.8))
                root.remove(object)
                save_false_positives(img_org, iName, xName, '_f0', classIdx, 
                        clip_x0, clip_y0, clip_x1, clip_y1)
            else:
                clip_x0 = random.randrange(0, int(xmin*0.8))
                clip_y0 = random.randrange(int(ymin+(ymax-ymin)*0.2), int(ymax))
                clip_x1 = random.randrange(int(xmin), int(xmin + (xmax-xmin)*0.8))
                clip_y1 = random.randrange(int(ymax+(img_h-ymax)*0.2), img_h)
                root.remove(object)
                save_false_positives(img_org, iName, xName, '_f1', classIdx, 
                        clip_x0, clip_y0, clip_x1, clip_y1)
        else:
            if ymin > (img_h - ymax):
                clip_x0 = random.randrange(int(xmin+(xmax-xmin)*0.2), int(xmax))
                clip_y0 = random.randrange(0, int(ymin*0.8))
                clip_x1 = random.randrange(int(xmax + (img_w-xmax)*0.2), img_w)
                clip_y1 = random.randrange(int(ymin), int(ymin+(ymax-ymin)*0.8))
                root.remove(object)
                save_false_positives(img_org, iName, xName, '_f2', classIdx, 
                        clip_x0, clip_y0, clip_x1, clip_y1)
            else:
                clip_x0 = random.randrange(int(xmin+(xmax-xmin)*0.2), int(xmax))
                clip_y0 = random.randrange(int(ymin+(ymax-ymin)*0.2), int(ymax))
                clip_x1 = random.randrange(int(xmax + (img_w-xmax)*0.2), img_w)
                clip_y1 = random.randrange(int(ymax+(img_h-ymax)*0.2), img_h)
                root.remove(object)
                save_false_positives(img_org, iName, xName, '_f3', classIdx, 
                        clip_x0, clip_y0, clip_x1, clip_y1)
    elif ratio < 0.7:
        clip_x0 = random.randrange(int(xmin*0.1), int(xmin*0.7))
        clip_y0 = random.randrange(int(ymin*0.1), int(ymin*0.7))
        clip_x1 = random.randrange(int(xmax + (img_w-xmax)*0.3), int(xmax + (img_w-xmax)*0.9))
        clip_y1 = random.randrange(int(ymax + (img_h-ymax)*0.3), int(ymax + (img_h-ymax)*0.9))
        save_file(img_org, iName, xName, '_c', classIdx, 
                p0x, p0y, p1x, p1y, p2x, p2y, p3x, p3y, 
                img_w, img_h, xmin, ymin, xmax, ymax,
                clip_x0, clip_y0, clip_x1, clip_y1)
        if xmin > (img_w - xmax):
            if ymin > (img_h - ymax):
                clip_x0 = random.randrange(0, int(xmin*0.8))
                clip_y0 = random.randrange(0, int(ymin*0.8))
                clip_x1 = random.randrange(int(xmin), int(xmin+(xmax-xmin)*0.8))
                clip_y1 = random.randrange(int(ymin), int(ymin+(ymax-ymin)*0.8))
                root.remove(object)
                save_false_positives(img_org, iName, xName, '_f4', classIdx, 
                        clip_x0, clip_y0, clip_x1, clip_y1)
            else:
                clip_x0 = random.randrange(0, int(xmin*0.8))
                clip_y0 = random.randrange(int(ymin+(ymax-ymin)*0.2), int(ymax))
                clip_x1 = random.randrange(int(xmin), int(xmin + (xmax-xmin)*0.8))
                clip_y1 = random.randrange(int(ymax+(img_h-ymax)*0.2), img_h)
                root.remove(object)
                save_false_positives(img_org, iName, xName, '_f5', classIdx, 
                        clip_x0, clip_y0, clip_x1, clip_y1)
        else:
            if ymin > (img_h - ymax):
                clip_x0 = random.randrange(int(xmin+(xmax-xmin)*0.2), int(xmax))
                clip_y0 = random.randrange(0, int(ymin*0.8))
                clip_x1 = random.randrange(int(xmax + (img_w-xmax)*0.2), img_w)
                clip_y1 = random.randrange(int(ymin), int(ymin+(ymax-ymin)*0.8))
                root.remove(object)
                save_false_positives(img_org, iName, xName, '_f6', classIdx, 
                        clip_x0, clip_y0, clip_x1, clip_y1)
            else:
                clip_x0 = random.randrange(int(xmin+(xmax-xmin)*0.2), int(xmax))
                clip_y0 = random.randrange(int(ymin+(ymax-ymin)*0.2), int(ymax))
                clip_x1 = random.randrange(int(xmax + (img_w-xmax)*0.2), img_w)
                clip_y1 = random.randrange(int(ymax+(img_h-ymax)*0.2), img_h)
                root.remove(object)
                save_false_positives(img_org, iName, xName, '_f7', classIdx, 
                        clip_x0, clip_y0, clip_x1, clip_y1)
    elif ratio < 1.0:
        pass
    return

def recursive_folder(path):
    files = os.listdir(path)
    files.sort()
    for file in files:
        fullName = os.path.join(path, file)
        if os.path.isfile(fullName):
            fPath, fName = os.path.split(fullName)
            fName, fExt = os.path.splitext(fName)
            if fExt in ('.jpg',):
                xPath = fPath + '.xml'
                xName = fName + '.xml'
                xFName = os.path.join(xPath, xName)
                if os.path.isfile(xFName):
                    gen_img_yolo(fullName, xFName)
                else:
                    print(xFName)
        else:
            recursive_folder(fullName)

def main():
    recursive_folder('/home/Light')

if __name__ == '__main__':
    main()
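A quick standalone sanity check of the `convert_box` helper used in prepare_detect.py: it maps a VOC-style box given as `(xmin, xmax, ymin, ymax)` to normalized YOLO `(x_center, y_center, w, h)`. The image size and box below are made-up values for illustration:

```python
def convert_box(size, box):
    # Same helper as in prepare_detect.py: size=(img_w, img_h),
    # box=(xmin, xmax, ymin, ymax) -> normalized (x, y, w, h).
    dw, dh = 1. / size[0], 1. / size[1]
    x, y, w, h = (box[0] + box[1]) / 2.0 - 1, (box[2] + box[3]) / 2.0 - 1, box[1] - box[0], box[3] - box[2]
    return x * dw, y * dh, w * dw, h * dh

# A 100x200 image with a box spanning x:10..30, y:40..80
# should give center (~0.19, ~0.295) and size (0.2, 0.2):
x, y, w, h = convert_box((100, 200), (10, 30, 40, 80))
print(x, y, w, h)
```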

11. Train your own model
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)

# Train the model
model.train(data='VOC.yaml', epochs=100, imgsz=640)

12. Inspect the input and output layers of an ONNX model
import onnx
model = onnx.load('yolov8n.onnx')
g_in = model.graph.input
g_out = model.graph.output


Tuesday, July 5, 2022

Reinstalling Ubuntu 18.04

Ubuntu 18.04 only supports DeepStream 6.0.1,
while DeepStream 6.1 requires Ubuntu 20.04,
so run Ubuntu 20.04 in Docker on top of Ubuntu 18.04.
The version matrix is as follows:
===========================
DS 6.1
Ubuntu 20.04
GCC 9.4.0
CUDA 11.6.1
cuDNN 8.4.0.27
TRT 8.2.5.1
Display Driver:R510.47.03
GStreamer 1.16.2
OpenCV 4.2.0
deepstream:6.1
===========================
DS 6.0.1
Ubuntu 18.04
GCC 7.3.0
CUDA 11.4.1
cuDNN 8.2+
TRT 8.0.1
Display Driver:R470.63.01
GStreamer 1.14.5
OpenCV 3.4.0
deepstream:6.0.1
===========================

Install the operating system
Select the boot device in the BIOS
Install Ubuntu
For Installation type, choose Something else
Create a partition with mount point /

Settings/Details/Users
Unlock
Automatic Login: ON

Update the system and install some common packages
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install ssh
$ sudo apt-get install python3-pip

Mount NFS
$ sudo apt-get install nfs-common
$ sudo mount -t nfs ip:/share_folder /mount_folder
$ sudo vi /etc/fstab
# use tabs (not spaces) between the fields
ip:/share_folder /mnt/mount_folder nfs defaults,bg 0 0
$ sudo mount -a

vnc
$ sudo apt-get install x11vnc
$ sudo x11vnc -storepasswd
$ sudo chown user:group ~/.vnc/passwd

Install the graphics driver, CUDA, and cuDNN
$ sudo ubuntu-drivers devices
$ sudo apt-get install nvidia-driver-510

https://developer.nvidia.com/cuda-downloads
Select Archive of Previous CUDA Releases
Select CUDA Toolkit 11.4.1
Select deb (local); deb (network) is no longer usable
$ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
$ sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
$ wget https://developer.download.nvidia.com/compute/cuda/11.4.1/local_installers/cuda-repo-ubuntu1804-11-4-local_11.4.1-470.57.02-1_amd64.deb
$ sudo dpkg -i cuda-repo-ubuntu1804-11-4-local_11.4.1-470.57.02-1_amd64.deb
$ sudo apt-key add /var/cuda-repo-ubuntu1804-11-4-local/7fa2af80.pub
$ sudo apt-get update
$ sudo apt-get -y install cuda-11-4
Then also install CUDA Toolkit 11.3.1 (TensorRT 8.0.1 requires CUDA 11.3)
$ sudo apt-get -y install cuda-11-3
Then also install CUDA Toolkit 11.6.1 (DeepStream 6.1 requires CUDA 11.6)
$ sudo apt-get -y install cuda-11-6
$ update-alternatives --display cuda
$ update-alternatives --config cuda


Reference: https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html
Download the Local Install for Ubuntu18.04 x86_64 (Deb)
$ sudo apt-get install zlib1g
$ sudo dpkg -i cudnn-local-repo-ubuntu1804-8.4.1.50_1.0-1_amd64.deb
$ sudo cp /var/cudnn-local-repo-ubuntu1804-8.4.1.50/cudnn-local-BA71F057-keyring.gpg /usr/share/keyrings/
$ sudo apt-get update
$ apt list -a libcudnn8
$ sudo apt-get install libcudnn8=8.4.1.50-1+cuda11.6
$ sudo apt-get install libcudnn8-dev=8.4.1.50-1+cuda11.6
$ sudo apt-get install libcudnn8-samples=8.4.1.50-1+cuda11.6

Install TensorRT 8.0.1
Since DeepStream 6.1 runs in Docker, TensorRT 8.2.5.1 does not need to be installed on the host
參考 https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html
Use the TensorRT Archives page to switch to the matching version of the install guide
In 3. Downloading TensorRT:
Open https://developer.nvidia.com/tensorrt and click GET STARTED
Open https://developer.nvidia.com/tensorrt-getting-started and click DOWNLOAD NOW
Select TensorRT 8
Select TensorRT 8.0 GA
Select TensorRT 8.0.1 GA for Ubuntu 18.04 and CUDA 11.3 DEB local repo package
$ sudo dpkg -i nv-tensorrt-repo-ubuntu1804-cuda11.3-trt8.0.1.6-ga-20210626_1-1_amd64.deb
$ sudo apt-key add /var/nv-tensorrt-repo-ubuntu1804-cuda11.3-trt8.0.1.6-ga-20210626/7fa2af80.pub
$ sudo apt-get update
$ sudo apt --fix-broken install
$ sudo apt-get upgrade
$ sudo apt-get install tensorrt

Install DeepStream
Since an older version is being installed,
refer to https://docs.nvidia.com/metropolis/deepstream-archive.html
$ sudo apt install libssl1.0.0
$ sudo apt install libgstreamer1.0-0
$ sudo apt install gstreamer1.0-tools
$ sudo apt install gstreamer1.0-plugins-good
$ sudo apt install gstreamer1.0-plugins-bad
$ sudo apt install gstreamer1.0-plugins-ugly
$ sudo apt install gstreamer1.0-libav
$ sudo apt install libgstrtspserver-1.0-0
$ sudo apt install libjansson4
$ sudo apt install gcc
$ sudo apt install make
$ sudo apt install git
$ sudo apt install python3

$ cd /usr/bin
$ sudo ln -s python3 python
$ git clone https://github.com/edenhill/librdkafka.git
$ cd librdkafka
$ git reset --hard 7101c2310341ab3f4675fc565f64f0967e135a6a
$ ./configure
$ make
$ sudo make install
$ sudo mkdir -p /opt/nvidia/deepstream/deepstream-6.0/lib
$ sudo cp /usr/local/lib/librdkafka* /opt/nvidia/deepstream/deepstream-6.0/lib
Download the DeepStream 6.0 dGPU Debian package
https://developer.nvidia.com/deepstream-6.0_6.0.1-1_amd64deb
$ sudo apt-get install ./deepstream-6.0_6.0.1-1_amd64.deb
$ rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin
$ cd /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/
$ deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt

Install NVIDIA Docker
Reference: https://docs.docker.com/engine/install/ubuntu/
$ sudo apt-get update
$ sudo apt-get install ca-certificates
$ sudo apt-get install curl
$ sudo apt-get install gnupg
$ sudo apt-get install lsb-release
$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$ echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
$ sudo docker run --rm hello-world

Reference: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
      && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
      && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
            sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
            sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
$ sudo systemctl restart docker
$ sudo groupadd docker
$ sudo usermod -a -G docker $USER
$ sudo reboot

Install NGC CLI
Reference: https://ngc.nvidia.com/setup/installers/cli
$ wget --content-disposition https://ngc.nvidia.com/downloads/ngccli_linux.zip && \
  unzip ngccli_linux.zip && \
  chmod u+x ngc-cli/ngc
$ find ngc-cli/ -type f -exec md5sum {} + | LC_ALL=C sort | md5sum -c ngc-cli.md5
$ echo "export PATH=\"\$PATH:$(pwd)/ngc-cli\"" >> ~/.bash_profile && source ~/.bash_profile
$ ngc config set
# just press Enter at each prompt
$ docker login nvcr.io
Username: $oauthtoken
Password: <Your API Key>

Install TensorRT OSS using Docker
Open https://github.com/nvidia/TensorRT
Switch to tag 8.0.1
$ git clone -b master https://github.com/nvidia/TensorRT TensorRT_OSS-8.0.1
$ cd TensorRT_OSS-8.0.1/
$ git describe --tags
8.2.0-EA-2-g96e2397
$ git tag -l
$ git branch -r
$ git checkout 8.0.1
$ git log -1
$ git describe --tags
8.0.1
$ git submodule update --init --recursive
$ vi docker/ubuntu-18.04.Dockerfile
Modify the following RUN instruction:
RUN cd /usr/local/bin && wget https://ngc.nvidia.com/downloads/ngccli_cat_linux.zip && \
  unzip ngccli_cat_linux.zip && chmod u+x ngc-cli/ngc && \
  rm ngccli_cat_linux.zip ngc-cli.md5 && echo "no-apikey\nascii\n" | ngc-cli/ngc config set

$ cat docker/ubuntu-18.04.Dockerfile | grep CUDA_VERSION
ARG CUDA_VERSION=11.3.1
$ ./docker/build.sh --file docker/ubuntu-18.04.Dockerfile --tag tensorrt-ubuntu18.04-cuda11.3 --cuda 11.3.1
$ ./docker/launch.sh --tag tensorrt-ubuntu18.04-cuda11.3 --gpus all
/workspace$ cd $TRT_OSSPATH
/workspace/TensorRT$ mkdir -p build && cd build
/workspace/TensorRT/build$ cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out
/workspace/TensorRT/build$ make -j$(nproc)
/workspace/TensorRT/build$ exit
$ mkdir backup
$ sudo mv /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.0.1 backup/
$ sudo cp build/out/libnvinfer_plugin.so.8.0.1 /usr/lib/x86_64-linux-gnu/
TensorRT 8.2.1 must also be prepared for the Docker Ubuntu 20.04 image:
change 8.0.1 to 8.2.1, 18.04 to 20.04, and 11.3.1 to 11.4.2, but it does not need to be installed

Develop DeepStream 6.0.1 with Docker
Reference: https://docs.nvidia.com/metropolis/deepstream/6.0.1/dev-guide/text/DS_docker_containers.html
$ docker pull nvcr.io/nvidia/deepstream:6.0.1-devel
$ xhost +
access control disabled, clients can connect from any host
$ sudo docker run --gpus all -it --rm --net=host \
  -v /tmp/.X11-unix:/tmp/.X11-unix -v /etc/localtime:/etc/localtime \
  -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.0 nvcr.io/nvidia/deepstream:6.0.1-devel
# update-alternatives --display cuda
# cat /etc/os-release
# cd samples/configs/deepstream-app
# deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt 
# cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-app
# export CUDA_VER=11.4
# make
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream-6.0/lib/gst-plugins/
# export URI=rtsp://user:passwd@192.168.0.108:554/live1s2.sdp
# gst-launch-1.0 uridecodebin uri=$URI ! nvvideoconvert ! nveglglessink
# exit
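The RTSP URI above hard-codes credentials; assembling it from variables keeps the password in one place. All values below are placeholders for your own camera.

```shell
# Build the RTSP URI used above from parts; every value is a placeholder.
CAM_USER=user
CAM_PASS=passwd
CAM_HOST=192.168.0.108
CAM_PORT=554
CAM_PATH=live1s2.sdp
URI="rtsp://$CAM_USER:$CAM_PASS@$CAM_HOST:$CAM_PORT/$CAM_PATH"
echo "$URI"
```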

Developing DeepStream 6.1 with Docker
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_docker_containers.html
$ docker pull nvcr.io/nvidia/deepstream:6.1-devel
$ xhost +
access control disabled, clients can connect from any host
$ sudo docker run --gpus all -it --rm --net=host \
  -v /tmp/.X11-unix:/tmp/.X11-unix -v /etc/localtime:/etc/localtime \
  -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.1 nvcr.io/nvidia/deepstream:6.1-devel
# cd samples/configs/deepstream-app
# deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt 
# cd /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-app
# export CUDA_VER=11.6
# make
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream-6.1/lib/gst-plugins/
# export URI=rtsp://user:passwd@192.168.0.108:554/live1s2.sdp
# gst-launch-1.0 uridecodebin uri=$URI ! nvvideoconvert ! nveglglessink
# exit

Testing Integrate TAO model with DeepStream SDK in the DeepStream 6.0.1 Docker container
$ git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git deepstream_tao_apps-tao3.0_ds6.0.1
$ sudo apt install gitk
$ cd deepstream_tao_apps-tao3.0_ds6.0.1/
$ git branch -r
$ git checkout release/tao3.0_ds6.0.1
$ git log -1
$ sudo docker run --gpus all -it --rm --net=host \
> -v /etc/localtime:/etc/localtime \
> -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
> -v /home/mark/Data/TensorRT/TensorRT_OSS-8.0.1/:/home/TensorRT \
> -v /home/mark/Data/DeepStream/deepstream_tap_apps/deepstream_tao_apps-tao3.0_ds6.0.1/:/home/deepstream_tao_apps \
> -w /opt/nvidia/deepstream/deepstream-6.0 nvcr.io/nvidia/deepstream:6.0.1-devel
# cd /home/deepstream_tao_apps/
# ./download_models.sh 
# export CUDA_VER=11.4
# make
# cp /home/TensorRT/build/out/libnvinfer_plugin.so.8.0.1 /usr/lib/x86_64-linux-gnu/
# ./apps/tao_detection/ds-tao-detection -c configs/frcnn_tao/pgie_frcnn_tao_config.txt -i /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264 -d
# cd /opt/nvidia/deepstream/deepstream-6.0/sources/gst-plugins/gst-nvdsvideotemplate/
# make
# cp libnvdsgst_videotemplate.so /opt/nvidia/deepstream/deepstream-6.0/lib/gst-plugins/
# rm -rf ~/.cache/gstreamer-1.0/
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream-6.0/lib/cvcore_libs/
# cd /home/deepstream_tao_apps/apps/tao_others/
# make
# export URI=rtsp://user:passwd@192.168.0.108:554/live1s2.sdp
# cd deepstream-bodypose2d-app/
# ./deepstream-bodypose2d-app 3 ../../../configs/bodypose2d_tao/sample_bodypose2d_model_config.txt $URI ./body2dout
# cd ../deepstream-emotion-app/
# ./deepstream-emotion-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt $URI ./landmarks
# cd ../deepstream-faciallandmark-app/
# ./deepstream-faciallandmark-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt $URI ./landmarks
# cd ../deepstream-gaze-app/
# ./deepstream-gaze-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt $URI ./gazenet
# cd ../deepstream-gesture-app/
# ./deepstream-gesture-app 3 3 ../../../configs/bodypose2d_tao/sample_bodypose2d_model_config.txt $URI ./gesture
# cd ../deepstream-heartrate-app/
# ./deepstream-heartrate-app 3 $URI ./heartrate
# exit

Testing Integrate TAO model with DeepStream SDK in the DeepStream 6.1 Docker container
$ git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git deepstream_tao_apps-tao3.0_ds6.1ga
$ sudo apt install gitk
$ cd deepstream_tao_apps-tao3.0_ds6.1ga/
$ git branch -r
$ git checkout release/tao3.0_ds6.1ga
$ git log -1
$ sudo docker run --gpus all -it --rm --net=host \
> -v /etc/localtime:/etc/localtime \
> -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
> -v /home/mark/Data/TensorRT/TensorRT_OSS-8.2.1/:/home/TensorRT \
> -v /home/mark/Data/DeepStream/deepstream_tap_apps/deepstream_tao_apps-tao3.0_ds6.1ga/:/home/deepstream_tao_apps \
> -w /opt/nvidia/deepstream/deepstream-6.1 nvcr.io/nvidia/deepstream:6.1-devel
# cp /home/TensorRT/build/out/libnvinfer_plugin.so.8.2.1 /usr/lib/x86_64-linux-gnu/
# cd /usr/lib/x86_64-linux-gnu/
# rm libnvinfer_plugin.so.8
# ln -s libnvinfer_plugin.so.8.2.1 libnvinfer_plugin.so.8
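Re-pointing the .so.8 symlink can be verified with readlink. The sketch below performs the same update in a scratch directory; on the real system the directory is /usr/lib/x86_64-linux-gnu and the step needs root.

```shell
# Same symlink update as above, in a scratch directory.
set -e
D=$(mktemp -d)
touch "$D/libnvinfer_plugin.so.8.2.1"              # the OSS-built plugin
ln -sf libnvinfer_plugin.so.8.2.1 "$D/libnvinfer_plugin.so.8"
readlink "$D/libnvinfer_plugin.so.8"               # -> libnvinfer_plugin.so.8.2.1
```

Using ln -sf makes the step idempotent, so rerunning it after a rebuild is harmless.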
# cd /home/deepstream_tao_apps/
# ./download_models.sh 
# export CUDA_VER=11.6
# make
# ./apps/tao_detection/ds-tao-detection -c configs/frcnn_tao/pgie_frcnn_tao_config_dgpu.txt -i /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264 -d
# cd /opt/nvidia/deepstream/deepstream-6.1/sources/gst-plugins/gst-nvdsvideotemplate/
# make
# cp libnvdsgst_videotemplate.so /opt/nvidia/deepstream/deepstream-6.1/lib/gst-plugins/
# rm -rf ~/.cache/gstreamer-1.0/
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream-6.1/lib/cvcore_libs/
# cd /home/deepstream_tao_apps/apps/tao_others/
# make
# export URI=rtsp://user:passwd@192.168.0.108:554/live1s2.sdp
# cd deepstream-bodypose2d-app/
# ./deepstream-bodypose2d-app 3 ../../../configs/bodypose2d_tao/sample_bodypose2d_model_config.txt 0 0 $URI ./body2dout
# cd ../deepstream-emotion-app/
# ./deepstream-emotion-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt $URI ./landmarks
# cd ../deepstream-faciallandmark-app/
# ./deepstream-faciallandmark-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt $URI ./landmarks
# cd ../deepstream-gaze-app/
# ./deepstream-gaze-app 3 ../../../configs/facial_tao/sample_faciallandmarks_config.txt $URI ./gazenet
# cd ../deepstream-gesture-app/
# ./deepstream-gesture-app 3 3 ../../../configs/bodypose2d_tao/sample_bodypose2d_model_config.txt $URI ./gesture
# cd ../deepstream-heartrate-app/
# ./deepstream-heartrate-app 3 $URI ./heartrate
# exit

Install other components
See the Install CMake v3.13 and Install OpenCV 4.4 sections of the Ubuntu 18.04 reinstall post.


Wednesday, April 20, 2022

Build TensorRT OSS (Open Source Software)

Reference: TensorRT Open Source Software


$ git clone -b master https://github.com/nvidia/TensorRT TensorRT_OSS
$ cd TensorRT_OSS
$ git describe --tags
8.2.0-EA-2-g96e2397
$ cd ..
$ mv TensorRT_OSS TensorRT_OSS-8.2.0
$ cd TensorRT_OSS-8.2.0/
$ git submodule update --init --recursive
The following three version strings (the Dockerfile suffix, the image tag, and the --cuda argument) must be consistent:
$ ./docker/build.sh --file docker/ubuntu-18.04.Dockerfile --tag tensorrt-ubuntu18.04-cuda11.4 --cuda 11.4.2
Building container:
> docker build -f docker/ubuntu-18.04.Dockerfile --build-arg CUDA_VERSION=11.4.2 --build-arg uid=1000 --build-arg gid=1000 --tag=tensorrt-ubuntu18.04-cuda11.4 .
Sending build context to Docker daemon  95.59MB
Step 1/31 : ARG CUDA_VERSION=11.4.2
Step 2/31 : ARG OS_VERSION=18.04
Step 3/31 : FROM nvidia/cuda:${CUDA_VERSION}-cudnn8-devel-ubuntu${OS_VERSION}
11.4.2-cudnn8-devel-ubuntu18.04: Pulling from nvidia/cuda
11323ed2c653: Already exists 
fc717811e279: Pull complete 
ccdba3c8eced: Pull complete 
2e5962ccde0c: Pull complete 
59fd9269dfdb: Pull complete 
ca8ef6b7b26f: Pull complete 
075319d1b973: Pull complete 
a7391b62d0c1: Pull complete 
ef615d2b7b27: Pull complete 
312d7b759a5a: Pull complete 
Digest: sha256:3ccdc2ec6ab45d9e902ac799eb4bfba7a39e2b3057962a95dd2b717493f8e252
Status: Downloaded newer image for nvidia/cuda:11.4.2-cudnn8-devel-ubuntu18.04
 ---> 58c89511016d
Step 4/31 : LABEL maintainer="NVIDIA CORPORATION"
 ---> Running in af3ded1c4476
Removing intermediate container af3ded1c4476
 ---> a72393f203bc
Step 5/31 : ENV TRT_VERSION 8.2.0.6
 ---> Running in 948c6eba13b7
Removing intermediate container 948c6eba13b7
 ---> 67dbe1482cfd

$ ./docker/launch.sh --tag tensorrt-ubuntu18.04-cuda11.4 --gpus all
trtuser@ae7dd87a158f:/workspace$ cd $TRT_OSSPATH
trtuser@ae7dd87a158f:/workspace/TensorRT$ mkdir -p build && cd build
trtuser@ae7dd87a158f:/workspace/TensorRT/build$ cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out
trtuser@ae7dd87a158f:/workspace/TensorRT/build$ make -j$(nproc)
trtuser@ae7dd87a158f:/workspace/TensorRT/build$ exit

$ mkdir backup
$ sudo mv /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.2.3 backup/
$ sudo cp TensorRT_OSS-8.2.0/build/out/libnvinfer_plugin.so.8.2.0 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.2.3
$ sudo ldconfig


Wednesday, March 2, 2022

TensorFlow to TensorRT

$ pip install onnxruntime
$ pip install -U tf2onnx
$ python -m tf2onnx.convert \
    --saved-model tensorflow-model-path \
    --output output.onnx
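A small guard around the conversion fails fast when the SavedModel path is wrong; the check looks for the saved_model.pb file that every TF SavedModel contains. The function name and the placeholder path below are illustrative, not part of tf2onnx.

```shell
# Guarded wrapper around the tf2onnx call above; tensorflow-model-path is a
# placeholder for a real SavedModel directory.
convert_to_onnx() {
  # $1 = SavedModel directory, $2 = output ONNX file
  if [ -f "$1/saved_model.pb" ]; then
    python -m tf2onnx.convert --saved-model "$1" --output "$2"
  else
    echo "no SavedModel found at $1" >&2
    return 1
  fi
}
convert_to_onnx tensorflow-model-path output.onnx || true  # placeholder path
```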

Wednesday, February 16, 2022

YOLOv4 to TensorRT

Reference: tensorrt_demos

$ cd TensorRT
$ git clone https://github.com/jkjung-avt/tensorrt_demos.git
$ cd tensorrt_demos

Set up the environment
$ cd yolo
$ ./install_pycuda.sh
$ pip3 install onnx==1.4.1
$ cd ../plugins
$ make

Download the YOLO weights and cfg
$ cd ../yolo
$ ./download_yolo.sh
Convert the weights to a TensorRT engine
$ python3 yolo_to_onnx.py -m yolov4-tiny-416
$ python3 onnx_to_tensorrt.py -m yolov4-tiny-416
Test
$ cd ..
$ python trt_yolo.py --image doc/dog_trt_yolov4_416.jpg -m yolov4-tiny-416

Using INT8
$ cd yolo
$ ln -s yolov4-tiny-416.cfg yolov4-tiny-int8-416.cfg
$ ln -s yolov4-tiny-416.onnx yolov4-tiny-int8-416.onnx
$ mkdir calib_images
# copy your calibration images into calib_images
$ python3 onnx_to_tensorrt.py -v --int8 -m yolov4-tiny-int8-416
This produces the following error:
[03/02/2022-15:22:07] [TRT] [V] 001_convolutional + 001_convolutional_bn Set Tactic Name: sm70_xmma_fprop_implicit_gemm_f16f16_f16f16_f16_nhwckrsc_nhwc_tilesize128x256x32_stage1_warpsize2x4x1_g1_tensor8x8x4_t1r3s3 Tactic: 46202665595848747
[03/02/2022-15:22:07] [TRT] [V] Deleting timing cache: 2020 entries, 504 hits
[03/02/2022-15:22:07] [TRT] [E] 1: Unexpected exception
ERROR: failed to build the TensorRT engine!
$ vi onnx_to_tensorrt.py
Move `from calibrator import YOLOEntropyCalibrator` to the top of the file.
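The fix can also be applied mechanically. The sketch below hoists the import with GNU sed, demonstrated on a scratch file standing in for onnx_to_tensorrt.py; editing by hand works just as well.

```shell
# Hoist the calibrator import to line 1, demonstrated on a scratch file.
set -e
F=$(mktemp)
printf '%s\n' 'import tensorrt as trt' \
              'def build_engine():' \
              '    from calibrator import YOLOEntropyCalibrator' > "$F"
sed -i '/^ *from calibrator import YOLOEntropyCalibrator/d' "$F"  # drop the late import
sed -i '1i from calibrator import YOLOEntropyCalibrator' "$F"     # put it at the top
head -n 1 "$F"   # -> from calibrator import YOLOEntropyCalibrator
```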

Thursday, June 17, 2021

LPD LPR on DeepStream

References
https://developer.nvidia.com/blog/creating-a-real-time-license-plate-detection-and-recognition-app
https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app
https://blogs.nvidia.com.tw/2021/05/12/creating-a-real-time-license-plate-detection-and-recognition-app/

Download the tlt-converter matching your CUDA, cuDNN, and TensorRT versions:
https://developer.nvidia.com/tlt-get-started


Full procedure
~/Data/DeepStream$ mkdir deepstream_lpr_app
~/Data/DeepStream$ cd deepstream_lpr_app/
~/Data/DeepStream/deepstream_lpr_app$ git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git
~/Data/DeepStream/deepstream_lpr_app$ cd deepstream_lpr_app/
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app$ ./download_us.sh
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app$ ../../tlt-converter/cuda11.0_cudnn8.0_trt7.2/tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 models/LP/LPR/us_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/lpr_us_onnx_b16.engine
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app$ make
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app$ cd deepstream-lpr-app/
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/deepstream-lpr-app$ cp dict_us.txt dict.txt
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/deepstream-lpr-app$ ./deepstream-lpr-app 1 2 0 /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.mp4 output.264
The commands above run successfully, generating the engines on first launch.
The engines can also be generated ahead of time with the following tlt-converter commands:
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/deepstream-lpr-app$ cd ../models/tlt_pretrained_models/trafficcamnet/
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/models/tlt_pretrained_models/trafficcamnet$ mv trafficnet_int8.txt trafficnet_int8.bin
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/models/tlt_pretrained_models/trafficcamnet$ ../../../../../tlt-converter/cuda11.0_cudnn8.0_trt7.2/tlt-converter -k tlt_encode -o output_bbox/BiasAdd,output_cov/Sigmoid -d 3,544,960 -t int8 -c trafficnet_int8.bin -e resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine resnet18_trafficcamnet_pruned.etlt
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/models/tlt_pretrained_models/trafficcamnet$ cd ../../LP/LPD/
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/models/LP/LPD$ ../../../../../tlt-converter/cuda11.0_cudnn8.0_trt7.2/tlt-converter -k nvidia_tlt -o output_bbox/BiasAdd,output_cov/Sigmoid -d 3,480,640 -t int8 -c usa_lpd_cal.bin -e usa_pruned.etlt_b16_gpu0_int8.engine usa_pruned.etlt
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/models/LP/LPD$ cd ../../../deepstream-lpr-app/
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/deepstream-lpr-app$ vi lpd_us_config.txt

Without on-screen display:
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/deepstream-lpr-app$ ./deepstream-lpr-app 1 2 0 /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.mp4 output.264
With on-screen display:
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/deepstream-lpr-app$ ./deepstream-lpr-app 1 3 0 /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.mp4 output.264


Retraining the models
Using the TLT launcher:
python3 -m venv --system-site-packages ~/envs/tlt
source ~/envs/tlt/bin/activate
(tlt) $ pip3 install nvidia-pyindex
(tlt) $ pip3 install nvidia-tlt
(tlt) $ cat ~/.tlt_mounts.json
{
    "Mounts": [
        {
            "source": "/home/username/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments",
            "destination": "/workspace/tlt-experiments"
        },
        {
            "source": "/home/username/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr",
            "destination": "/workspace/openalpr"
        }

    ]
}
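Before launching any tlt task it is worth checking that ~/.tlt_mounts.json actually parses. The sketch below validates a scratch copy with python3's json.tool; point F at the real file on your machine (the mount paths in the scratch copy are placeholders).

```shell
# Validate a .tlt_mounts.json-style file before tlt tries to read it.
set -e
F=$(mktemp)   # stand-in for ~/.tlt_mounts.json; paths below are placeholders
cat > "$F" <<'EOF'
{
    "Mounts": [
        {"source": "/tmp/tlt-experiments", "destination": "/workspace/tlt-experiments"}
    ]
}
EOF
python3 -m json.tool "$F" > /dev/null && echo "mounts file is valid JSON"
```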
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr$ git clone https://github.com/openalpr/benchmarks benchmarks
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr$ mkdir tlt-experiments
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr$ mkdir openalpr
Retrain the LPD model
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr$ cd openalpr/
Download https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/release/tlt3.0/misc/dev_blog/LPDR/lpd/lpd_prepare_data.py
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ python3 lpd_prepare_data.py --input_dir ../benchmarks/endtoend/us --output_dir lpd --target_width 640 --target_height 480
Download https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/release/tlt3.0/misc/dev_blog/LPDR/lpd/SPECS_tfrecord.txt
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ tlt detectnet_v2 dataset_convert -d /workspace/openalpr/SPECS_tfrecord.txt -o /workspace/openalpr/lpd_tfrecord/lpd
Download https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/release/tlt3.0/misc/dev_blog/LPDR/lpd/SPECS_train.txt
Download the pretrained model:
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ ngc registry model download-version "nvidia/tlt_lpdnet:unpruned_v1.0"
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ mv tlt_lpdnet_vunpruned_v1.0/usa_unpruned.tlt .
Train:
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ tlt detectnet_v2 train -e /workspace/openalpr/SPECS_train.txt -r /workspace/openalpr/exp_unpruned -k nvidia_tlt
Prune:
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ tlt detectnet_v2 prune -m /workspace/openalpr/exp_unpruned/weights/model.tlt -o /workspace/openalpr/exp_pruned/model_pruned_06.tlt -pth 0.6 -k nvidia_tlt
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ cp SPECS_train.txt SPECS_retrain.txt
Change pretrained_model_file to "/workspace/openalpr/exp_pruned/model_pruned_06.tlt"
Retrain:
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ tlt detectnet_v2 train -e /workspace/openalpr/SPECS_retrain.txt -r /workspace/openalpr/exp_pruned -k nvidia_tlt
Export:
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ tlt detectnet_v2 export -m /workspace/openalpr/exp_unpruned/weights/model.tlt -o /workspace/openalpr/export/unpruned_model.etlt --cal_cache_file /workspace/openalpr/export/unpruned_calibration.bin -e /workspace/openalpr/SPECS_train.txt -k nvidia_tlt --cal_image_dir /workspace/openalpr/lpd/data/image --data_type int8 --batch_size 4 --batches 10 --engine_file /workspace/openalpr/export/unpruned_int8.trt
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ tlt detectnet_v2 export -m /workspace/openalpr/exp_pruned/weights/model.tlt -o /workspace/openalpr/export/pruned_model.etlt --cal_cache_file /workspace/openalpr/export/pruned_calibration.bin -e /workspace/openalpr/SPECS_train.txt -k nvidia_tlt --cal_image_dir /workspace/openalpr/lpd/data/image --data_type int8 --batch_size 4 --batches 10 --engine_file /workspace/openalpr/export/pruned_int8.trt
Evaluate:
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ tlt detectnet_v2 evaluate -m /workspace/openalpr/exp_unpruned/weights/model.tlt -k nvidia_tlt -e /workspace/openalpr/SPECS_train.txt
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ tlt detectnet_v2 evaluate -m /workspace/openalpr/exp_pruned/weights/model.tlt -k nvidia_tlt -e /workspace/openalpr/SPECS_train.txt
Retrain the LPR model
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments$ mkdir lprnet
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments$ cd lprnet/
Download https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/release/tlt3.0/misc/dev_blog/LPDR/lpr/preprocess_openalpr_benchmark.py
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ python3 preprocess_openalpr_benchmark.py --input_dir=../../benchmarks/endtoend/us --output_dir=../data/openalpr
Download https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/release/tlt3.0/misc/dev_blog/LPDR/lpr/tutorial_spec.txt
Edit tutorial_spec.txt, changing
augmentation_config {
    output_width: 96
    output_height: 48
    output_channel: 3
    keep_original_prob: 0.3
    transform_prob: 0.5
    rotate_degree: 5
}
to
augmentation_config {
    output_width: 96
    output_height: 48
    output_channel: 3
    keep_original_prob: 0.3
    max_rotate_degree: 5
    rotate_prob: 0.5
    blur_prob: 0.5
}

Download the pretrained model:
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ ngc registry model download-version "nvidia/tlt_lprnet:trainable_v1.0"
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ cp tlt_lprnet_vtrainable_v1.0/us_lprnet_baseline18_trainable.tlt .
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ mkdir specs
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ cp tlt_lprnet_vtrainable_v1.0/us_lp_characters.txt specs/
Train the model:
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ tlt lprnet train -e /workspace/tlt-experiments/lprnet/tutorial_spec.txt -r /workspace/tlt-experiments/lprnet/ -k nvidia_tlt -m /workspace/tlt-experiments/lprnet/us_lprnet_baseline18_trainable.tlt --use_amp
Export the model:
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ tlt lprnet export -m /workspace/tlt-experiments/lprnet/weights/lprnet_epoch-24.tlt -k nvidia_tlt -e /workspace/tlt-experiments/lprnet/tutorial_spec.txt
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ ls weights/lprnet_epoch-24.etlt
Evaluate:
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ tlt lprnet evaluate -m /workspace/tlt-experiments/lprnet/weights/lprnet_epoch-24.tlt -k nvidia_tlt -e /workspace/tlt-experiments/lprnet/tutorial_spec.txt

Monday, January 11, 2021

Ubuntu 18.04 reinstall

Following https://developer.nvidia.com/deepstream-getting-started
choose the version of each component.

https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html

sudo apt-get install ssh
sudo apt install python3-pip
sudo ln -s /usr/bin/python3 /usr/bin/python

Install the NVIDIA driver
https://www.linuxbabe.com/ubuntu/install-nvidia-driver-ubuntu-18-04
sudo lshw -c display
sudo lshw -c video
sudo ubuntu-drivers devices
sudo ubuntu-drivers autoinstall
sudo reboot
sudo lshw -c display

Install the CUDA Toolkit
https://developer.nvidia.com/cuda-downloads
Select CUDA Toolkit 10.2
Linux/x86_64/Ubuntu/18.04/deb(local)
vi ~/.bashrc
export PATH=/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH

Install cuDNN
https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html
https://developer.nvidia.com/cudnn
Select cuDNN v8.0.5 for CUDA 10.2
Select cuDNN Library for Linux (x86_64)
tar -xzvf cudnn-10.2-linux-x64-v8.0.5.39.tgz
sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
sudo ldconfig
/sbin/ldconfig.real: /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.8 is not a symbolic link
cd /usr/local/cuda-10.2/targets/x86_64-linux/lib
sudo rm libcudnn.so.8 libcudnn.so
sudo ln -s libcudnn.so.8.0.5 libcudnn.so.8
sudo ln -s libcudnn.so.8 libcudnn.so
sudo ldconfig
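After rebuilding the links, the chain libcudnn.so -> libcudnn.so.8 -> libcudnn.so.8.0.5 should resolve cleanly. The sketch below reproduces and checks it in a scratch directory; on the real system the directory is /usr/local/cuda-10.2/targets/x86_64-linux/lib and root is required.

```shell
# Rebuild and verify the cuDNN symlink chain in a scratch directory.
set -e
D=$(mktemp -d)
touch "$D/libcudnn.so.8.0.5"                 # the real library file
ln -sf libcudnn.so.8.0.5 "$D/libcudnn.so.8"
ln -sf libcudnn.so.8     "$D/libcudnn.so"
readlink -f "$D/libcudnn.so"                 # resolves to .../libcudnn.so.8.0.5
```

With the chain in place, ldconfig no longer complains that libcudnn.so.8 is not a symbolic link.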

Install TensorRT
Download the TensorRT binary:
https://developer.nvidia.com/nvidia-tensorrt-7x-download
Select TensorRT 7.1.3.4 for Ubuntu 18.04 and CUDA 10.2 TAR package
version="7.1.3.4"
os="Ubuntu-18.04"
arch=$(uname -m)
cuda="cuda-10.2"
cudnn="cudnn8.0"
tar -xvzf TensorRT-7.1.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.0.tar.gz
vi ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/Data/TensorRT/TensorRT-7.1.3.4/lib
source ~/.bashrc
cd TensorRT-7.1.3.4/python
sudo pip3 install tensorrt-7.1.3.4-cp36-none-linux_x86_64.whl
cd ../uff
sudo pip3 install uff-0.6.9-py2.py3-none-any.whl
cd ../graphsurgeon/
sudo pip3 install graphsurgeon-0.4.5-py2.py3-none-any.whl
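The LD_LIBRARY_PATH line added to ~/.bashrc above grows the variable every time the file is re-sourced. A guarded append keeps it idempotent; this is a sketch assuming the same TensorRT path as above.

```shell
# Append the TensorRT lib dir to LD_LIBRARY_PATH only if it is not already
# there, so re-sourcing ~/.bashrc stays idempotent.
TRT_LIB="$HOME/Data/TensorRT/TensorRT-7.1.3.4/lib"
case ":$LD_LIBRARY_PATH:" in
  *":$TRT_LIB:"*) ;;  # already present, nothing to do
  *) export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$TRT_LIB" ;;
esac
```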

Install GStreamer
https://yingrenn.blogspot.com/2020/09/gstreamer.html

Install DeepStream
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html#dgpu-setup-for-ubuntu
sudo apt install \
 libssl1.0.0 \
 libgstreamer1.0-0 \
 gstreamer1.0-tools \
 gstreamer1.0-plugins-good \
 gstreamer1.0-plugins-bad \
 gstreamer1.0-plugins-ugly \
 gstreamer1.0-libav \
 libgstrtspserver-1.0-0 \
 libgstrtspserver-1.0-dev \
 libjansson4
git clone https://github.com/edenhill/librdkafka.git
cd librdkafka
git reset --hard 7101c2310341ab3f4675fc565f64f0967e135a6a
./configure
make
sudo make install
sudo mkdir -p /opt/nvidia/deepstream/deepstream-5.0/lib
sudo cp /usr/local/lib/librdkafka* /opt/nvidia/deepstream/deepstream-5.0/lib
Download DeepStream:
https://developer.nvidia.com/assets/Deepstream/5.0/ga/secure/deepstream_sdk_5.0.1_x86_64.tbz2
sudo tar -xvf deepstream_sdk_v5.0.1_x86_64.tbz2 -C /
cd /opt/nvidia/deepstream/deepstream-5.0/
sudo ./install.sh
sudo ldconfig

Install CMake v3.13
wget http://www.cmake.org/files/v3.13/cmake-3.13.5.tar.gz
tar xpvf cmake-3.13.5.tar.gz cmake-3.13.5
cd cmake-3.13.5
sudo apt-get install zlib1g-dev
sudo apt-get install curl
sudo apt-get install libcurl3
sudo apt-get install libcurl4 libcurl4-openssl-dev
./bootstrap --system-curl
make -j$(nproc)
sudo make install

Install Docker
https://docs.docker.com/engine/install/ubuntu/
sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

Install NVIDIA Docker
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker
curl https://get.docker.com | sh \
  && sudo systemctl start docker \
  && sudo systemctl enable docker
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
   && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
   && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
sudo groupadd docker
sudo usermod -a -G docker $USER
sudo reboot

If sudo apt update reports
W: Target CNF (stable/cnf/Commands-all) is configured multiple times in /etc/apt/sources.list:52 and /etc/apt/sources.list.d/docker.list:1
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 4773BD5E130D1D45
sudo rm /etc/apt/sources.list.d/docker.list

Install TensorRT 7.1 OSS
https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html
https://github.com/NVIDIA/TensorRT/tree/master
Switch from master to release/7.1
Download TensorRT OSS (it contains the TensorRT plugins and the Caffe and ONNX parsers):
git clone -b release/7.1 https://github.com/nvidia/TensorRT TensorRT
cd TensorRT
git submodule update --init --recursive
export TRT_SOURCE=`pwd`
Point TRT_RELEASE at the TensorRT binary extracted earlier:
cd TensorRT-7.1.3.4
export TRT_RELEASE=`pwd`
cd $TRT_SOURCE
./docker/build.sh --file docker/ubuntu.Dockerfile --tag tensorrt-ubuntu --os 18.04 --cuda 10.2
./docker/launch.sh --tag tensorrt-ubuntu --gpus all --release $TRT_RELEASE --source $TRT_SOURCE
cd $TRT_SOURCE
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_RELEASE/lib -DTRT_OUT_DIR=`pwd`/out
make -j$(nproc)
exit
mkdir backup
mv $TRT_RELEASE/targets/x86_64-linux-gnu/lib/libnvinfer_plugin.so.7.1.3 backup
cp $TRT_SOURCE/build/out/libnvinfer_plugin.so.7.2.1 $TRT_RELEASE/targets/x86_64-linux-gnu/lib/libnvinfer_plugin.so.7.1.3

Install OpenCV 4.4
git clone https://github.com/opencv/opencv.git opencv-4.4.0 -b 4.4.0 --depth 1
git clone https://github.com/opencv/opencv_contrib.git opencv_contrib-4.4.0 -b 4.4.0 --depth 1
sudo apt-get update
sudo add-apt-repository ppa:alex-p/tesseract-ocr
sudo apt install tesseract-ocr libtesseract-dev
sudo apt-get install libleptonica-dev
sudo apt-get install qt5-default
sudo apt-get install qtcreator
sudo apt-get install build-essential cmake unzip pkg-config
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install libgtk-3-dev
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install libhdf5-dev
sudo apt-get install python3-dev
mkdir build; cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D WITH_TBB=ON \
-D BUILD_opencv_cudacodec=OFF \
-D ENABLE_FAST_MATH=1 \
-D CUDA_FAST_MATH=1 \
-D WITH_CUDA=ON \
-D WITH_CUBLAS=ON \
-D WITH_LIBV4L=ON \
-D BUILD_opencv_python3=ON \
-D BUILD_opencv_python2=OFF \
-D BUILD_opencv_java=OFF \
-D WITH_V4L=ON \
-D WITH_QT=ON \
-D WITH_OPENGL=ON \
-D WITH_GSTREAMER=ON \
-D WITH_GTK=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D OPENCV_PC_FILE_NAME=opencv.pc \
-D OPENCV_ENABLE_NONFREE=OFF \
-D OPENCV_EXTRA_MODULES_PATH=/your_path_to/opencv/opencv_contrib-4.4.0/modules \
-D BUILD_EXAMPLES=ON \
-D WITH_CUDNN=ON \
-D CUDNN_VERSION="8.0.5" \
-D OPENCV_DNN_CUDA=ON \
-D CUDNN_INCLUDE_DIR=/usr/local/cuda/include \
-D CUDNN_LIBRARY=/usr/local/cuda/lib64/libcudnn.so.8.0.5 \
-D CUDA_ARCH_BIN=7.5 \
../opencv-4.4.0
#-D OPENCV_ENABLE_NONFREE=ON \
#-D OpenGL_GL_PREFERENCE=LEGACY \

make -j$(nproc)
sudo make install
sudo ldconfig
opencv_version -v

Download the Tesseract traditional Chinese data files chi_tra.traineddata and chi_tra_vert.traineddata from
https://github.com/tesseract-ocr/tessdata
and place them in /usr/share/tesseract-ocr/4.00/tessdata

Install Xpra
https://www.xpra.org/trac/wiki/Building/Debian
sudo apt-get install libx11-dev libxtst-dev libxcomposite-dev libxdamage-dev \
 libxkbfile-dev python-all-dev
sudo apt-get install libgtk-3-dev python3-dev python3-cairo-dev python-gi-dev cython3
sudo apt-get install xauth x11-xkb-utils
sudo apt-get install libx264-dev libvpx-dev yasm
sudo apt-get install libnvidia-encode-440
sudo apt-get install libavformat-dev libavcodec-dev libswscale-dev
sudo apt-get install libturbojpeg-dev
sudo apt-get install libwebp-dev
sudo apt-get install uglifyjs brotli libjs-jquery libjs-jquery-ui gnome-backgrounds
sudo apt-get install python3-opengl python3-numpy python3-pil
sudo apt-get install python3-rencode python3-lz4 python3-dbus python3-cryptography \
 python3-netifaces python3-yaml
sudo apt-get install python3-setproctitle python3-xdg python3-pyinotify python3-opencv
sudo apt-get install libpam-dev quilt xserver-xorg-dev xutils-dev xvfb keyboard-configuration
sudo apt-get install python3-kerberos python3-gssapi
sudo apt-get install gstreamer1.0-pulseaudio gstreamer1.0-alsa \
 gstreamer1.0-plugins-base gstreamer1.0-plugins-good \
 gstreamer1.0-plugins-ugly
sudo apt-get install cups-filters cups-common cups-pdf python3-cups
sudo apt-get install openssh-client sshpass python3-paramiko
sudo apt-get install devscripts build-essential lintian debhelper
wget https://www.xpra.org/src/xpra-4.0.6.tar.xz
tar -xf xpra-4.0.6.tar.xz
cd xpra-4.0.6
vi setup.py
#!/usr/bin/env python3
sudo ./setup.py install

Install frp
https://yingrenn.blogspot.com/2020/03/frp.html
vi frps.ini
[common]
bind_port = 7000
dashboard_port = 7001
dashboard_user = user
dashboard_pwd = password

vi /etc/systemd/system/frps.service
[Unit]
Description=FRP Server Daemon

[Service]
Type=simple
ExecStartPre=-/usr/sbin/setcap cap_net_bind_service=+ep /home/mark/Data/frp/frp_0.34.3_linux_amd64/frps
ExecStart=/path_to_frp/frp_0.34.3_linux_amd64/frps -c /path_to_frp/frp_0.34.3_linux_amd64/frps.ini
Restart=always
RestartSec=20s
User=nobody
PermissionsStartOnly=true
LimitNOFILE=infinity

[Install]
WantedBy=multi-user.target

sudo systemctl start frps
sudo systemctl enable frps

Install Gitea, Tomcat, and Nginx
https://yingrenn.blogspot.com/2019/11/gitea-in-ubuntu.html
https://yingrenn.blogspot.com/search/label/tomcat

Install x11vnc
Because Ubuntu 18.04 switched from LightDM to GDM3, x11vnc only works after a console login.
sudo apt install x11vnc
sudo x11vnc -storepasswd
sudo chown mark.mark ~/.vnc/passwd
sudo vi /etc/systemd/system/x11vnc.service
# Description: Custom Service Unit file
# File: /etc/systemd/system/x11vnc.service
[Unit]
Description="x11vnc"
Requires=display-manager.service
After=display-manager.service

[Service]
ExecStart=/usr/bin/x11vnc -loop -nopw -xkb -repeat -noxrecord -noxfixes -noxdamage -forever -rfbport 5900 -display :1 -auth /run/user/1000/gdm/Xauthority -rfbauth /home/mark/.vnc/passwd
ExecStop=/usr/bin/killall x11vnc
Restart=on-failure
RestartSec=2

[Install]
WantedBy=multi-user.target

Install VirtualBox
https://www.virtualbox.org/wiki/Downloads
Download from the official site:
virtualbox-6.1_6.1.16-140961~Ubuntu~bionic_amd64.deb
Oracle_VM_VirtualBox_Extension_Pack-6.1.16.vbox-extpack
sudo dpkg -i virtualbox-6.1_6.1.16-140961~Ubuntu~bionic_amd64.deb
sudo groupadd win10disk
sudo usermod -a -G win10disk youruser
sudo udevadm info /dev/sdX | grep UUID
E: ID_PART_TABLE_UUID=01234567-89ab-cdef-0123-456789abcde
vi /etc/udev/rules.d/99-win10disk.rules
ENV{ID_PART_TABLE_UUID}=="01234567-89ab-cdef-0123-456789abcde", GROUP="win10disk"
ls -l /dev/sdb
brw-rw---- 1 root win10disk 8, 16 Nov 4 23:33 /dev/sdb
VBoxManage internalcommands createrawvmdk -filename .VirtualBox/Crucial1T.vmdk -rawdisk /dev/sda
sudo usermod -a -G vboxusers youruser
Copy the old VM, then edit the uuids in win10.vbox and win10.vbox-prev
so they match the uuid in .VirtualBox/Crucial1T.vmdk
Open VirtualBox
File/Preferences...
under Extensions click +
and select the downloaded Oracle_VM_VirtualBox_Extension_Pack-6.1.16.vbox-extpack


If only the login screen has no mouse and keyboard while everything else works:
sudo apt-get install xserver-xorg-input-all