Nvidia TLT (Transfer Learning Toolkit) has been renamed to Nvidia TAO (Train, Adapt, and Optimize)
$ curl https://get.docker.com | sh \
&& sudo systemctl --now enable docker
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
$ newgrp docker    # or log out and back in so the group change takes effect
$ docker run hello-world
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
$ sudo systemctl restart docker
$ sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
$ wget -O ngccli_linux.zip https://ngc.nvidia.com/downloads/ngccli_linux.zip && unzip -o ngccli_linux.zip && chmod u+x ngc
$ md5sum -c ngc.md5
$ echo "export PATH=\"\$PATH:$(pwd)\"" >> ~/.bash_profile && source ~/.bash_profile
$ ngc config set
$ docker login nvcr.io
Username: $oauthtoken
Password: YOUR_NGC_API_KEY
$ ngc registry model list
$ ngc registry image list
$ pip3 install virtualenv virtualenvwrapper
$ mkdir $HOME/.virtualenvs
$ vi ~/.bashrc    # add the following lines:
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source $HOME/.local/bin/virtualenvwrapper.sh    # adjust the path if virtualenvwrapper.sh is installed elsewhere
$ source ~/.bashrc
$ mkvirtualenv tao -p /usr/bin/python3
$ deactivate
$ workon tao
$ pip3 install nvidia-pyindex
$ pip3 install nvidia-tao
$ pip3 install jupyter
$ tao info
$ cd Data/tao/
# Download using wget
$ wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tao/cv_samples/versions/v1.3.0/zip -O cv_samples_v1.3.0.zip
$ unzip -u cv_samples_v1.3.0.zip -d ./cv_samples_v1.3.0 && rm -rf cv_samples_v1.3.0.zip && cd ./cv_samples_v1.3.0
# Download using the NGC CLI
$ ngc registry resource download-version "nvidia/tao/cv_samples:v1.3.0"
$ cd ./cv_samples_v1.3.0
# Start jupyter
$ jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root
# Open http://0.0.0.0:8888/tree in a browser
Take http://0.0.0.0:8888/notebooks/detectnet_v2/detectnet_v2.ipynb as an example,
and refer to the TAO command and parameter documentation under Object Detection/DetectNet_v2.
Modify:
os.environ["LOCAL_PROJECT_DIR"] = FIXME
os.environ["LOCAL_PROJECT_DIR"] = "your_host_path_to/cv_samples_vv1.3.0"
~/.tao_mounts.json will be generated automatically; it maps host directories into the TAO containers.
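A rough sketch of what the generated ~/.tao_mounts.json usually looks like (the host paths here are illustrative and depend on your LOCAL_PROJECT_DIR; the destinations are the container paths the notebook expects):

{
    "Mounts": [
        {
            "source": "/your_host_path_to/cv_samples_v1.3.0",
            "destination": "/workspace/tao-experiments"
        },
        {
            "source": "/your_host_path_to/cv_samples_v1.3.0/detectnet_v2/specs",
            "destination": "/workspace/tao-experiments/detectnet_v2/specs"
        }
    ]
}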
Install the TAO launcher
!pip3 install nvidia-pyindex
!pip3 install nvidia-tao
Download the KITTI object detection dataset
Go to http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=2d,
register an account, and download:
Download left color images of object data set (12 GB) - the left-camera images
Download training labels of object data set (5 MB)
Generate TFRecords (see the sketch below)
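A sketch of the dataset_convert step as run from a notebook cell, assuming the $SPECS_DIR/$DATA_DOWNLOAD_DIR variables defined in the notebook and the KITTI TFRecords spec that ships with the sample (names may differ in other versions):

!tao detectnet_v2 dataset_convert \
    -d $SPECS_DIR/detectnet_v2_tfrecords_kitti_trainval.txt \
    -o $DATA_DOWNLOAD_DIR/tfrecords/kitti_trainval/kitti_trainval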
Download and install the NGC CLI
Use ngc to download pretrained_detectnet_v2:resnet18 (see below)
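A sketch of the download command (the destination directory is illustrative; check ngc registry model list nvidia/tao/pretrained_detectnet_v2:* if the version tag differs):

$ ngc registry model download-version nvidia/tao/pretrained_detectnet_v2:resnet18 \
    --dest ./detectnet_v2/pretrained_resnet18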
Workflow: train → evaluate → prune → (re)train → evaluate → inference (see the sketch below)
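A rough sketch of those notebook cells, assuming the spec files, directories, and pruning threshold from the sample notebook ($SPECS_DIR, $USER_EXPERIMENT_DIR, $DATA_DOWNLOAD_DIR and $KEY are the notebook's variables); treat it as an outline, not the exact cells:

!tao detectnet_v2 train -e $SPECS_DIR/detectnet_v2_train_resnet18_kitti.txt \
    -r $USER_EXPERIMENT_DIR/experiment_dir_unpruned -k $KEY -n resnet18_detector
!tao detectnet_v2 evaluate -e $SPECS_DIR/detectnet_v2_train_resnet18_kitti.txt \
    -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/resnet18_detector.tlt -k $KEY
!tao detectnet_v2 prune -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/resnet18_detector.tlt \
    -o $USER_EXPERIMENT_DIR/experiment_dir_pruned/resnet18_nopool_bn_detectnet_v2_pruned.tlt \
    -pth 0.0000052 -k $KEY
!tao detectnet_v2 train -e $SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt \
    -r $USER_EXPERIMENT_DIR/experiment_dir_retrain -k $KEY -n resnet18_detector_pruned
!tao detectnet_v2 inference -e $SPECS_DIR/detectnet_v2_inference_kitti_tlt.txt \
    -o $USER_EXPERIMENT_DIR/tlt_infer_testing -i $DATA_DOWNLOAD_DIR/testing/image_2 -k $KEY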
export produces resnet18_detector.etlt (sketch below)
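A sketch of the export step (FP32 by default; paths follow the sample notebook):

!tao detectnet_v2 export \
    -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt \
    -o $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.etlt \
    -k $KEY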
INT8 optimization (results were poor; use QAT instead) - the commands are sketched after this list
calibration_tensorfile produces calibration.tensor
export produces calibration.bin, resnet18_detector.etlt, and resnet18_detector.trt.int8 (usable for inference)
converter produces resnet18_detector.trt (usable for inference)
inference (results were poor; use QAT instead)
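A sketch of the INT8 path under the same assumptions (flag names follow the sample notebook; the converter's -d/-o values are the DetectNet_v2 ResNet-18 defaults of a 3x384x1248 input and the output_cov/Sigmoid and output_bbox/BiasAdd output nodes):

!tao detectnet_v2 calibration_tensorfile -e $SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt \
    -m 10 -o $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.tensor
!tao detectnet_v2 export \
    -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/resnet18_detector_pruned.tlt \
    -o $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.etlt -k $KEY \
    --data_type int8 --batches 10 \
    --cal_data_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.tensor \
    --cal_cache_file $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.bin \
    --engine_file $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.trt.int8
!tao converter $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.etlt \
    -k $KEY -c $USER_EXPERIMENT_DIR/experiment_dir_final/calibration.bin \
    -o output_cov/Sigmoid,output_bbox/BiasAdd -d 3,384,1248 -i nchw -t int8 \
    -e $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector.trt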
QAT requires training the model again.
Workflow: train → evaluate → export → evaluate → inference (see the sketch below)
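The sample ships a separate QAT training/retraining spec; the key difference is enable_qat in training_config. A minimal sketch of that fragment (the spec file name is an assumption based on the sample layout):

# e.g. in detectnet_v2_retrain_resnet18_kitti_qat.txt
training_config {
  enable_qat: true
}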
List tao tasks
$ tao list
Stop a tao task
$ tao stop --container_id 40e28a67c2
inference config file
inferencer_config {
  tensorrt_config {
    etlt_model: "/workspace/tao-experiments/detectnet_v2/experiment_dir_final/resnet18_detector.etlt"
    save_engine: true
    trt_engine: "/workspace/tao-experiments/detectnet_v2/experiment_dir_final/resnet18_detector.trt"
    calibrator_config {
      calibration_cache: "/workspace/tao-experiments/detectnet_v2/experiment_dir_final/calibration.bin"
    }
  }
}
If the trt_engine file does not exist yet, it is built from the etlt_model (and saved because save_engine is true).
If the trt_engine already exists, it is used directly.
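A sketch of running inference with this spec from the notebook (the spec file name follows the sample's etlt/TensorRT variant and may differ):

!tao detectnet_v2 inference -e $SPECS_DIR/detectnet_v2_inference_kitti_etlt.txt \
    -o $USER_EXPERIMENT_DIR/etlt_infer_testing \
    -i $DATA_DOWNLOAD_DIR/testing/image_2 -k $KEY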