
Tuesday, July 18, 2023

gstreamer fpsdisplaysink videorate

$ export URI=rtsp://root:A1234567@192.168.112.202:554/live1s1.sdp
$ export GST_DEBUG=fpsdisplaysink:5

Test fpsdisplaysink with videotestsrc
$ gst-launch-1.0 videotestsrc ! 'video/x-raw,width=1280,height=720,framerate=60/1' ! videoconvert ! fpsdisplaysink text-overlay=true

fpsdisplaysink without text-overlay
$ gst-launch-1.0 rtspsrc location=$URI protocols=tcp+udp ! application/x-rtp, media=video ! decodebin  ! nvvideoconvert ! nvegltransform ! fpsdisplaysink text-overlay=0 video-sink=nveglglessink

fpsdisplaysink with text-overlay
$ gst-launch-1.0 rtspsrc location=$URI protocols=tcp+udp ! application/x-rtp, media=video ! decodebin  ! nvvideoconvert ! fpsdisplaysink text-overlay=1 video-sink=autovideosink

Use videorate to set the framerate (videorate duplicates or drops frames to match the requested rate)
$ gst-launch-1.0 rtspsrc location=$URI protocols=tcp+udp ! application/x-rtp, media=video ! decodebin ! nvvideoconvert ! videorate ! video/x-raw,framerate=60/1 ! nvvideoconvert ! fpsdisplaysink text-overlay=1 video-sink=autovideosink

Add rtpjitterbuffer, but it does not seem to make a difference
$ gst-launch-1.0 rtspsrc location=$URI protocols=tcp+udp ! application/x-rtp, media=video ! rtpjitterbuffer latency=0 ! decodebin  ! nvvideoconvert ! fpsdisplaysink text-overlay=1 video-sink=autovideosink

No on-screen display; the fps can still be read from the debug log
$ gst-launch-1.0 rtspsrc location=$URI protocols=tcp+udp ! application/x-rtp, media=video ! decodebin  ! nvvideoconvert ! fpsdisplaysink text-overlay=0 video-sink=fakesink
Output:
0:00:02.590692019 1665816 0xffff6001d700 DEBUG         fpsdisplaysink fpsdisplaysink.c:372:display_current_fps:<fpsdisplaysink0> Updated max-fps to 1.102534
0:00:02.590778644 1665816 0xffff6001d700 DEBUG         fpsdisplaysink fpsdisplaysink.c:376:display_current_fps:<fpsdisplaysink0> Updated min-fps to 1.102534


$ gst-launch-1.0 rtspsrc location=$URI protocols=tcp+udp ! application/x-rtp, media=video ! decodebin  ! nvvideoconvert ! videorate ! video/x-raw,framerate=60/1 ! nvvideoconvert ! fpsdisplaysink text-overlay=0 video-sink=fakesink
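
Instead of parsing the debug log, the fps numbers can also be read programmatically: fpsdisplaysink emits an "fps-measurements" signal when its signal-fps-measurements property is enabled. A minimal PyGObject sketch (assuming python3-gi and the GStreamer introspection data are installed; videotestsrc is used here in place of the RTSP source):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    "videotestsrc ! video/x-raw,width=1280,height=720,framerate=60/1 ! "
    "videoconvert ! fpsdisplaysink name=fps text-overlay=false "
    "video-sink=fakesink signal-fps-measurements=true")

def on_fps(element, fps, droprate, avgfps):
    # Called periodically with the current, dropped and average frame rate
    print("fps=%.2f drop=%.2f avg=%.2f" % (fps, droprate, avgfps))

pipeline.get_by_name("fps").connect("fps-measurements", on_fps)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()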

Friday, July 14, 2023

Default route on Ubuntu with multiple network interfaces

$ cd /etc/NetworkManager/system-connections/
Edit the file for the corresponding interface and add a route under [ipv4]
$ sudo vi 'Wired connection 1.nmconnection'
[ipv4]
route1=0.0.0.0/0,192.168.0.254,1
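
After saving the file, reactivate the connection (for example with: sudo nmcli connection up 'Wired connection 1') so NetworkManager applies the change; ip route should then list 192.168.0.254 as the default gateway.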

Wednesday, July 12, 2023

install PyTorch on Ubuntu


$ python3 -m venv pytorch
$ source pytorch/bin/activate
$ pip3 install --upgrade --no-cache-dir pip
$ sudo update-alternatives --config cuda
$ pip3 install torch==1.10.2+cu113 torchvision==0.11.3+cu113 torchaudio==0.10.2+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
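
A quick sanity check, inside the venv, that the CUDA build of PyTorch was installed and can see the GPU:

import torch

print(torch.__version__)          # e.g. 1.10.2+cu113
print(torch.cuda.is_available())  # should print True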

Wednesday, July 5, 2023

Running a classifier with DeepStream nvinfer (primary mode) on an ROI

I spent a long time trying to place an ROI before nvinfer with nvdspreprocess, without success.
Using the ROI crop of nvvideoconvert directly works fine.

Set the property src-crop="left:top:width:height", for example:
g_object_set(G_OBJECT(pre_proc), "src-crop", "50:0:320:240", NULL);
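
The same property can be set from the DeepStream Python bindings; a minimal sketch (assuming DeepStream's GStreamer plugins are available, element name and ROI values taken from the C snippet above):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
# nvvideoconvert crops its input to the ROI before the buffer reaches nvinfer
pre_proc = Gst.ElementFactory.make("nvvideoconvert", "pre_proc")
pre_proc.set_property("src-crop", "50:0:320:240")  # left:top:width:height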


Monday, July 3, 2023

YOLOv8 and TensorRT

Based on the official YOLOv8 GitHub repository

1. Download DeepStream-Yolo and Ultralytics YOLOv8
git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
git clone https://github.com/ultralytics/ultralytics.git /mnt/Data/DeepStream/DeepStream-Yolo/ultralytics

2. Create the deepstream_yolo docker container
docker_run.sh
xhost +
docker run --name='deepstream_yolo' --gpus all -it --net=host --privileged \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v /etc/localtime:/etc/localtime \
  -v /mnt/Data/DeepStream/DeepStream-Yolo/DeepStream-Yolo:/home/DeepStream-Yolo \
  -v /mnt/Data/DeepStream/DeepStream-Yolo/ultralytics:/home/ultralytics \
  -v /mnt/Data/DeepStream/DeepStream-Yolo/read_me:/home/read_me \
  -v /mnt/Data/DeepStream/DeepStream-Yolo/datasets:/home/datasets \
  -v /mnt/CT1000SSD/ImageData/Light:/home/Light \
  -e DISPLAY=$DISPLAY \
  -w /home/read_me \
  nvcr.io/nvidia/deepstream:6.2-devel
  
3. Inside Docker, install DeepStream-Yolo
apt-get install build-essential
/opt/nvidia/deepstream/deepstream/user_additional_install.sh
cd /home/DeepStream-Yolo
CUDA_VER=11.8 make -C nvdsinfer_custom_impl_Yolo

4. Inside Docker, install Ultralytics YOLOv8
#python3 -m pip install --upgrade pip
pip3 install --upgrade pip
pip3 install protobuf numpy
cd /home/ultralytics
#pip install -e .
pip3 install -r requirements.txt
python3 setup.py install
pip3 install onnx onnxsim onnxruntime

5. Inside Docker, download, convert, and test the yolov8s.pt and yolov8n.pt models
cd /home/ultralytics
cp /home/DeepStream-Yolo/utils/export_yoloV8.py /home/ultralytics
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt
python3 export_yoloV8.py -w yolov8s.pt --dynamic
python3 export_yoloV8.py -w yolov8n.pt --dynamic
cp yolov8s.onnx labels.txt /home/DeepStream-Yolo
cp yolov8n.onnx labels.txt /home/DeepStream-Yolo
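
Optionally, a quick check that an exported file is a well-formed ONNX graph (uses the onnx package installed in step 4):

import onnx

# check_model raises an exception if the graph is malformed
onnx.checker.check_model(onnx.load("yolov8s.onnx"))
print("yolov8s.onnx OK")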

6. Remove the deepstream_yolo container
$ docker container rm deepstream_yolo

7. Re-enter Docker
docker_attach.sh
xhost +
docker start deepstream_yolo
docker attach deepstream_yolo

8. Convert the model format to onnx
yolov8n.py
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.yaml")  # build a new model from scratch
model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

# Use the model
model.train(data="coco128.yaml", epochs=3)  # train the model
metrics = model.val()  # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
path = model.export(format="onnx")  # export the model to ONNX format

Running python3 yolov8n.py produces the following error
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
Fix:
$ sudo systemctl stop docker
Get the container id
$ docker inspect deepstream_yolo | grep Id
"Id": "???????"
Edit the container's ShmSize
$ sudo vi /var/lib/docker/containers/your_container_id/hostconfig.json
"ShmSize":8589934592
$ sudo systemctl restart docker
$ ./docker_attach.sh
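
(An alternative to editing hostconfig.json is to recreate the container with a larger shared-memory size from the start, e.g. adding --shm-size=8g to the docker run command in step 2.)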

9. Test the onnx models in DeepStream. For each pair of settings below, keep the value that matches the setup (model file, source URI, live-source, inference config, file-loop).
# cd /home/DeepStream-Yolo

# vi config_infer_primary_yoloV8.txt
onnx-file=yolov8s.onnx
onnx-file=yolov8n.onnx

# vi deepstream_app_config.txt
uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
uri=rtsp://root:A1234567@192.168.0.107:554/live1s1.sdp
live-source=0
live-source=1
config-file=config_infer_primary.txt
config-file=config_infer_primary_yoloV8.txt
file-loop=0
file-loop=1

# deepstream-app -c deepstream_app_config.txt

10. Prepare your own image data: convert PASCAL VOC format (xml produced by LabelImg) to YOLO txt
prepare_detect.py
import cv2
import os
import random
import re
import xml.etree.ElementTree as ET

import numpy as np

LIGHT_CLASSES_LIST = [
    'forward_right',
    'others',
    'red',
    'red_left',
    'yellow',
    ]
        
# Save a crop that does not contain the labelled object, used as a
# background (false-positive) sample; no label file is written.
def save_false_positives(img_org, iName, xName, tag, classIdx,
        clip_x0, clip_y0, clip_x1, clip_y1):
    img_new = img_org[clip_y0:clip_y1, clip_x0:clip_x1]
    fPath, fName = os.path.split(iName)
    fName, fExt = os.path.splitext(fName)
    fName = fName + tag + fExt
    rndPaths = ['train', 'val', 'test']
    rndPath = random.choices(rndPaths, weights=(8,1,1))[0]
    iName = os.path.join('/home/datasets/Light/images', rndPath, fName)
    cv2.imwrite(iName, img_new)
        
# Convert a VOC box (xmin, xmax, ymin, ymax) into normalized YOLO (cx, cy, w, h).
def convert_box(size, box):
    dw, dh = 1. / size[0], 1. / size[1]
    x, y, w, h = (box[0] + box[1]) / 2.0 - 1, (box[2] + box[3]) / 2.0 - 1, box[1] - box[0], box[3] - box[2]
    return x * dw, y * dh, w * dw, h * dh
          
# Save a crop that still contains the labelled object and write the matching
# YOLO label file into the same train/val/test split.
def save_file(img_org, iName, xName, tag, classIdx,
        p0x, p0y, p1x, p1y, p2x, p2y, p3x, p3y, 
        img_w, img_h, xmin, ymin, xmax, ymax,
        clip_x0, clip_y0, clip_x1, clip_y1):
    img_new = img_org[clip_y0:clip_y1, clip_x0:clip_x1]
    fPath, fName = os.path.split(iName)
    fName, fExt = os.path.splitext(fName)
    fName = fName + tag + fExt
    rndPaths = ['train', 'val', 'test']
    rndPath = random.choices(rndPaths, weights=(8,1,1))[0]
    iName = os.path.join('/home/datasets/Light/images', rndPath, fName)
    cv2.imwrite(iName, img_new)
    
    w = clip_x1 - clip_x0
    h = clip_y1 - clip_y0
    xmin = xmin - clip_x0
    ymin = ymin - clip_y0
    xmax = xmax - clip_x0
    ymax = ymax - clip_y0
    bb = convert_box((w, h), (xmin, xmax, ymin, ymax))
    fPath, fName = os.path.split(xName)
    fName, fExt = os.path.splitext(fName)
    fName = fName + tag + '.txt'
    tName = os.path.join('/home/datasets/Light/labels', rndPath, fName)
    with open(tName, 'w') as f:
        f.write(" ".join([str(a) for a in (classIdx, *bb)]) + '\n')
        
# Read one image and its LabelImg xml, then write several randomly cropped
# copies (plus background-only crops) together with YOLO label files.
def gen_img_yolo(iName, xName):
    tree = ET.parse(open(xName))
    root = tree.getroot()
    img_w = int(root.find('size').find('width').text)
    img_h = int(root.find('size').find('height').text)
    for idx, object in enumerate(root.findall('object')):
        name = object.find('name').text
        classIdx = LIGHT_CLASSES_LIST.index(name)
        #print(classIdx, name)
        bndbox = object.find('bndbox')
        p0x = int(bndbox.find('p0x').text)
        p0y = int(bndbox.find('p0y').text)
        p1x = int(bndbox.find('p1x').text)
        p1y = int(bndbox.find('p1y').text)
        p2x = int(bndbox.find('p2x').text)
        p2y = int(bndbox.find('p2y').text)
        p3x = int(bndbox.find('p3x').text)
        p3y = int(bndbox.find('p3y').text)
        xmin = int(bndbox.find('xmin').text)
        ymin = int(bndbox.find('ymin').text)
        xmax = int(bndbox.find('xmax').text)
        ymax = int(bndbox.find('ymax').text)
        if xmin != p0x or xmin != p3x or ymin != p0y or ymin != p1y or \
                xmax != p1x or xmax != p2x or ymax != p2y or ymax != p3y:
            print('error:bndbox', xName)
            exit()
        if idx > 0:
            print('error:object', xName)
            exit()
    img_org = cv2.imread(iName)
    if img_org.shape[0] != img_h or img_org.shape[1] != img_w:
        print(img_org.shape, (img_h, img_w))
        exit()
    img = np.copy(img_org)

    clip_x0 = random.randrange(0, int(xmin*0.5))
    clip_y0 = random.randrange(0, int(ymin*0.5))
    clip_x1 = random.randrange(int(xmax + (img_w-xmax)*0.5), img_w+1)
    clip_y1 = random.randrange(int(ymax + (img_h-ymax)*0.5), img_h+1)
    save_file(img_org, iName, xName, '', classIdx, 
            p0x, p0y, p1x, p1y, p2x, p2y, p3x, p3y, 
            img_w, img_h, xmin, ymin, xmax, ymax,
            clip_x0, clip_y0, clip_x1, clip_y1)
    # If the object is small relative to the image width, generate extra
    # tighter crops and background-only (false-positive) crops.
    ratio = (xmax - xmin) / img_w
    if ratio < 0.3:
        clip_x0 = random.randrange(int(xmin*0.3), int(xmin*0.8))
        clip_y0 = random.randrange(int(ymin*0.3), int(ymin*0.8))
        clip_x1 = random.randrange(int(xmax + (img_w-xmax)*0.2), int(xmax + (img_w-xmax)*0.7))
        clip_y1 = random.randrange(int(ymax + (img_h-ymax)*0.2), int(ymax + (img_h-ymax)*0.7))
        save_file(img_org, iName, xName, '_a', classIdx, 
                p0x, p0y, p1x, p1y, p2x, p2y, p3x, p3y, 
                img_w, img_h, xmin, ymin, xmax, ymax,
                clip_x0, clip_y0, clip_x1, clip_y1)
        clip_x0 = random.randrange(int(xmin*0.5), int(xmin*0.9))
        clip_y0 = random.randrange(int(ymin*0.5), int(ymin*0.9))
        clip_x1 = random.randrange(int(xmax + (img_w-xmax)*0.1), int(xmax + (img_w-xmax)*0.5))
        clip_y1 = random.randrange(int(ymax + (img_h-ymax)*0.1), int(ymax + (img_h-ymax)*0.5))
        save_file(img_org, iName, xName, '_b', classIdx, 
                p0x, p0y, p1x, p1y, p2x, p2y, p3x, p3y, 
                img_w, img_h, xmin, ymin, xmax, ymax,
                clip_x0, clip_y0, clip_x1, clip_y1)
        if xmin > (img_w - xmax):
            if ymin > (img_h - ymax):
                clip_x0 = random.randrange(0, int(xmin*0.8))
                clip_y0 = random.randrange(0, int(ymin*0.8))
                clip_x1 = random.randrange(int(xmin), int(xmin+(xmax-xmin)*0.8))
                clip_y1 = random.randrange(int(ymin), int(ymin+(ymax-ymin)*0.8))
                root.remove(object)
                save_false_positives(img_org, iName, xName, '_f0', classIdx, 
                        clip_x0, clip_y0, clip_x1, clip_y1)
            else:
                clip_x0 = random.randrange(0, int(xmin*0.8))
                clip_y0 = random.randrange(int(ymin+(ymax-ymin)*0.2), int(ymax))
                clip_x1 = random.randrange(int(xmin), int(xmin + (xmax-xmin)*0.8))
                clip_y1 = random.randrange(int(ymax+(img_h-ymax)*0.2), img_h)
                root.remove(object)
                save_false_positives(img_org, iName, xName, '_f1', classIdx, 
                        clip_x0, clip_y0, clip_x1, clip_y1)
        else:
            if ymin > (img_h - ymax):
                clip_x0 = random.randrange(int(xmin+(xmax-xmin)*0.2), int(xmax))
                clip_y0 = random.randrange(0, int(ymin*0.8))
                clip_x1 = random.randrange(int(xmax + (img_w-xmax)*0.2), img_w)
                clip_y1 = random.randrange(int(ymin), int(ymin+(ymax-ymin)*0.8))
                root.remove(object)
                save_false_positives(img_org, iName, xName, '_f2', classIdx, 
                        clip_x0, clip_y0, clip_x1, clip_y1)
            else:
                clip_x0 = random.randrange(int(xmin+(xmax-xmin)*0.2), int(xmax))
                clip_y0 = random.randrange(int(ymin+(ymax-ymin)*0.2), int(ymax))
                clip_x1 = random.randrange(int(xmax + (img_w-xmax)*0.2), img_w)
                clip_y1 = random.randrange(int(ymax+(img_h-ymax)*0.2), img_h)
                root.remove(object)
                save_false_positives(img_org, iName, xName, '_f3', classIdx, 
                        clip_x0, clip_y0, clip_x1, clip_y1)
    elif ratio < 0.7:
        clip_x0 = random.randrange(int(xmin*0.1), int(xmin*0.7))
        clip_y0 = random.randrange(int(ymin*0.1), int(ymin*0.7))
        clip_x1 = random.randrange(int(xmax + (img_w-xmax)*0.3), int(xmax + (img_w-xmax)*0.9))
        clip_y1 = random.randrange(int(ymax + (img_h-ymax)*0.3), int(ymax + (img_h-ymax)*0.9))
        save_file(img_org, iName, xName, '_c', classIdx, 
                p0x, p0y, p1x, p1y, p2x, p2y, p3x, p3y, 
                img_w, img_h, xmin, ymin, xmax, ymax,
                clip_x0, clip_y0, clip_x1, clip_y1)
        if xmin > (img_w - xmax):
            if ymin > (img_h - ymax):
                clip_x0 = random.randrange(0, int(xmin*0.8))
                clip_y0 = random.randrange(0, int(ymin*0.8))
                clip_x1 = random.randrange(int(xmin), int(xmin+(xmax-xmin)*0.8))
                clip_y1 = random.randrange(int(ymin), int(ymin+(ymax-ymin)*0.8))
                root.remove(object)
                save_false_positives(img_org, iName, xName, '_f4', classIdx, 
                        clip_x0, clip_y0, clip_x1, clip_y1)
            else:
                clip_x0 = random.randrange(0, int(xmin*0.8))
                clip_y0 = random.randrange(int(ymin+(ymax-ymin)*0.2), int(ymax))
                clip_x1 = random.randrange(int(xmin), int(xmin + (xmax-xmin)*0.8))
                clip_y1 = random.randrange(int(ymax+(img_h-ymax)*0.2), img_h)
                root.remove(object)
                save_false_positives(img_org, iName, xName, '_f5', classIdx, 
                        clip_x0, clip_y0, clip_x1, clip_y1)
        else:
            if ymin > (img_h - ymax):
                clip_x0 = random.randrange(int(xmin+(xmax-xmin)*0.2), int(xmax))
                clip_y0 = random.randrange(0, int(ymin*0.8))
                clip_x1 = random.randrange(int(xmax + (img_w-xmax)*0.2), img_w)
                clip_y1 = random.randrange(int(ymin), int(ymin+(ymax-ymin)*0.8))
                root.remove(object)
                save_false_positives(img_org, iName, xName, '_f6', classIdx, 
                        clip_x0, clip_y0, clip_x1, clip_y1)
            else:
                clip_x0 = random.randrange(int(xmin+(xmax-xmin)*0.2), int(xmax))
                clip_y0 = random.randrange(int(ymin+(ymax-ymin)*0.2), int(ymax))
                clip_x1 = random.randrange(int(xmax + (img_w-xmax)*0.2), img_w)
                clip_y1 = random.randrange(int(ymax+(img_h-ymax)*0.2), img_h)
                root.remove(object)
                save_false_positives(img_org, iName, xName, '_f7', classIdx, 
                        clip_x0, clip_y0, clip_x1, clip_y1)
    elif ratio < 1.0:
        pass
    return

# Walk the image folder tree; for each .jpg with a matching xml annotation,
# generate the YOLO training data.
def recursive_folder(path):
    files = os.listdir(path)
    files.sort()
    for file in files:
        fullName = os.path.join(path, file)
        if os.path.isfile(fullName):
            fPath, fName = os.path.split(fullName)
            fName, fExt = os.path.splitext(fName)
            if fExt in ('.jpg',):  # tuple, so this matches the extension exactly
                xPath = fPath + '.xml'
                xName = fName + '.xml'
                xFName = os.path.join(xPath, xName)
                if os.path.isfile(xFName):
                    gen_img_yolo(fullName, xFName)
                else:
                    print(xFName)
        else:
            recursive_folder(fullName)

def main():
    recursive_folder('/home/Light')

if __name__ == '__main__':
    main()
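
Note that cv2.imwrite and open() do not create directories, so the train/val/test split folders under /home/datasets/Light must exist before running the script. A one-off setup sketch:

import os

# Create the split folders that prepare_detect.py writes into
for split in ('train', 'val', 'test'):
    os.makedirs(os.path.join('/home/datasets/Light/images', split), exist_ok=True)
    os.makedirs(os.path.join('/home/datasets/Light/labels', split), exist_ok=True)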

11. Train your own model
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training)

# Train the model
model.train(data='VOC.yaml', epochs=100, imgsz=640)
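
data='VOC.yaml' is the Ultralytics sample dataset config; to train on the Light dataset prepared in step 10, a dataset yaml along the following lines would be passed instead (a hypothetical light.yaml, with paths and class names taken from prepare_detect.py):

# light.yaml (hypothetical)
path: /home/datasets/Light
train: images/train
val: images/val
test: images/test
names:
  0: forward_right
  1: others
  2: red
  3: red_left
  4: yellow

The training call then becomes model.train(data='light.yaml', epochs=100, imgsz=640).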

12. Inspect the input and output layers of the onnx model
import onnx

model = onnx.load('yolov8n.onnx')
g_in = model.graph.input    # list of input tensors (name, dtype, shape)
g_out = model.graph.output  # list of output tensors
print(g_in)
print(g_out)