
Wednesday, June 30, 2021

nvidia driver on ubuntu

$ lspci -k | grep -A 2 -i "VGA"
$ ubuntu-drivers devices
$ sudo ubuntu-drivers autoinstall
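To confirm which kernel driver the GPU actually ended up bound to, the "Kernel driver in use" field of lspci -k is the quickest check. A minimal sketch; the sample output below is made up, on a real machine feed it from lspci -k directly:

```shell
# Parse the "Kernel driver in use" field; the sample text below is hypothetical.
# On a real system, replace it with the output of: lspci -k | grep -A 2 -i "VGA"
sample='01:00.0 VGA compatible controller: NVIDIA Corporation Device
	Subsystem: Some Vendor Device
	Kernel driver in use: nvidia'
driver=$(printf '%s\n' "$sample" | awk -F': ' '/Kernel driver in use/ {print $2}')
echo "GPU kernel driver in use: $driver"
```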

Downloading the driver from the NVIDIA website is not recommended.


cuDNN installation

Version compatibility between GPU, driver, CUDA, and cuDNN:
https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html

Installation:
https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html

Friday, June 25, 2021

Switching between Intel and NVIDIA display outputs on Ubuntu

Graphical selection
$ nvidia-settings
Open NVIDIA X Server Settings
Select PRIME Profiles in the left tab
The right pane shows three options:
NVIDIA (Performance Mode)
NVIDIA On-Demand
Intel (Power Saving Mode)

Command-line selection
$ sudo prime-select -?
$ sudo prime-select query
$ sudo prime-select on-demand

Boot directly into the graphical interface
$ sudo systemctl set-default graphical.target
Boot directly into the text console
$ sudo systemctl set-default multi-user.target
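The two targets above are easy to mix up, so a tiny helper makes the mapping explicit. This wrapper is hypothetical and only prints the command instead of calling systemctl:

```shell
# Hypothetical helper: map a friendly mode name to the systemd default target.
boot_mode_cmd() {
  case "$1" in
    gui)  echo "sudo systemctl set-default graphical.target" ;;
    text) echo "sudo systemctl set-default multi-user.target" ;;
    *)    echo "unknown mode: $1" >&2; return 1 ;;
  esac
}
boot_mode_cmd text   # prints the command for booting into the text console
```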

Switch consoles with Ctrl + Alt + Fn
On Intel, the text-mode login: prompt appears
On NVIDIA, only Ctrl + Alt + F1 gives the graphical session


Thursday, June 17, 2021

LPD LPR on DeepStream

References
https://developer.nvidia.com/blog/creating-a-real-time-license-plate-detection-and-recognition-app
https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app
https://blogs.nvidia.com.tw/2021/05/12/creating-a-real-time-license-plate-detection-and-recognition-app/

Download the tlt-converter matching your CUDA, cuDNN, and TensorRT versions:
https://developer.nvidia.com/tlt-get-started


Full procedure
~/Data/DeepStream$ mkdir deepstream_lpr_app
~/Data/DeepStream$ cd deepstream_lpr_app/
~/Data/DeepStream/deepstream_lpr_app$ git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git
~/Data/DeepStream/deepstream_lpr_app$ cd deepstream_lpr_app/
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app$ ./download_us.sh
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app$ ../../tlt-converter/cuda11.0_cudnn8.0_trt7.2/tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 models/LP/LPR/us_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/lpr_us_onnx_b16.engine
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app$ make
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app$ cd deepstream-lpr-app/
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/deepstream-lpr-app$ cp dict_us.txt dict.txt
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/deepstream-lpr-app$ ./deepstream-lpr-app 1 2 0 /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.mp4 output.264
The above runs, generating the engines on the first execution.
The engines can also be generated ahead of time with tlt-converter, as follows:
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/deepstream-lpr-app$ cd ../models/tlt_pretrained_models/trafficcamnet/
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/models/tlt_pretrained_models/trafficcamnet$ mv trafficnet_int8.txt trafficnet_int8.bin
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/models/tlt_pretrained_models/trafficcamnet$ ../../../../../tlt-converter/cuda11.0_cudnn8.0_trt7.2/tlt-converter -k tlt_encode -o output_bbox/BiasAdd,output_cov/Sigmoid -d 3,544,960 -t int8 -c trafficnet_int8.bin -e resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine resnet18_trafficcamnet_pruned.etlt
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/models/tlt_pretrained_models/trafficcamnet$ cd ../../LP/LPD/
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/models/LP/LPD$ ../../../../../tlt-converter/cuda11.0_cudnn8.0_trt7.2/tlt-converter -k nvidia_tlt -o output_bbox/BiasAdd,output_cov/Sigmoid -d 3,480,640 -t int8 -c usa_lpd_cal.bin -e usa_pruned.etlt_b16_gpu0_int8.engine usa_pruned.etlt
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/models/LP/LPD$ cd ../../../deepstream-lpr-app/
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/deepstream-lpr-app$ vi lpd_us_config.txt

Without on-screen display:
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/deepstream-lpr-app$ ./deepstream-lpr-app 1 2 0 /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.mp4 output.264
With on-screen display:
~/Data/DeepStream/deepstream_lpr_app/deepstream_lpr_app/deepstream-lpr-app$ ./deepstream-lpr-app 1 3 0 /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.mp4 output.264
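The positional arguments are easier to read with names; the meanings below follow the usage shown above (first argument model type, second output sink, third ROI flag). This sketch only assembles and prints the command:

```shell
# Name the positional arguments of deepstream-lpr-app for clarity.
MODEL_TYPE=1   # 1 = US plate models
SINK=3         # 2 = no display, 3 = on-screen display
ROI=0
INPUT=/opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.mp4
OUTPUT=output.264
cmd="./deepstream-lpr-app $MODEL_TYPE $SINK $ROI $INPUT $OUTPUT"
echo "$cmd"   # print only; run it from the deepstream-lpr-app directory
```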


Retraining the models
Use tlt-launcher
$ python3 -m venv --system-site-packages tlt
$ source ~/envs/tlt/bin/activate
(tlt) $ pip3 install nvidia-pyindex
(tlt) $ pip3 install nvidia-tlt
(tlt) $ cat ~/.tlt_mounts.json
{
    "Mounts": [
        {
            "source": "/home/username/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments",
            "destination": "/workspace/tlt-experiments"
        },
        {
            "source": "/home/username/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr",
            "destination": "/workspace/openalpr"
        }

    ]
}
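A malformed ~/.tlt_mounts.json makes the launcher fail with confusing errors, so it is worth validating the JSON first. A sketch using a throwaway demo copy; in practice point MOUNTS at ~/.tlt_mounts.json:

```shell
# Validate a mounts file as JSON before running tlt (throwaway demo copy).
MOUNTS=./tlt_mounts_demo.json
cat > "$MOUNTS" <<'EOF'
{
    "Mounts": [
        { "source": "/tmp/src", "destination": "/workspace/tlt-experiments" }
    ]
}
EOF
valid=$(python3 -m json.tool "$MOUNTS" > /dev/null 2>&1 && echo yes || echo no)
echo "mounts file valid: $valid"
rm -f "$MOUNTS"
```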
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr$ git clone https://github.com/openalpr/benchmarks benchmarks
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr$ mkdir tlt-experiments
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr$ mkdir openalpr
Retrain the LPD model
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr$ cd openalpr/
Download https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/release/tlt3.0/misc/dev_blog/LPDR/lpd/lpd_prepare_data.py
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ python3 lpd_prepare_data.py --input_dir ../benchmarks/endtoend/us --output_dir lpd --target_width 640 --target_height 480
Download https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/release/tlt3.0/misc/dev_blog/LPDR/lpd/SPECS_tfrecord.txt
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ tlt detectnet_v2 dataset_convert -d /workspace/openalpr/SPECS_tfrecord.txt -o /workspace/openalpr/lpd_tfrecord/lpd
Download https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/release/tlt3.0/misc/dev_blog/LPDR/lpd/SPECS_train.txt
Download the pretrained model
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ ngc registry model download-version "nvidia/tlt_lpdnet:unpruned_v1.0"
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ mv tlt_lpdnet_vunpruned_v1.0/usa_unpruned.tlt .
Train
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ tlt detectnet_v2 train -e /workspace/openalpr/SPECS_train.txt -r /workspace/openalpr/exp_unpruned -k nvidia_tlt
Prune
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ tlt detectnet_v2 prune -m /workspace/openalpr/exp_unpruned/weights/model.tlt -o /workspace/openalpr/exp_pruned/model_pruned_06.tlt -pth 0.6 -k nvidia_tlt
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ cp SPECS_train.txt SPECS_retrain.txt
Edit SPECS_retrain.txt and set pretrained_model_file: "/workspace/openalpr/exp_pruned/model_pruned_06.tlt"
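That one-line edit can also be scripted with sed; a hedged sketch run against a throwaway demo file here (point it at SPECS_retrain.txt in practice):

```shell
# Demo: rewrite pretrained_model_file in a spec file with sed.
cat > SPECS_retrain_demo.txt <<'EOF'
pretrained_model_file: "/workspace/openalpr/usa_unpruned.tlt"
EOF
sed -i 's|pretrained_model_file: ".*"|pretrained_model_file: "/workspace/openalpr/exp_pruned/model_pruned_06.tlt"|' SPECS_retrain_demo.txt
line=$(cat SPECS_retrain_demo.txt)
echo "$line"
rm -f SPECS_retrain_demo.txt
```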
Retrain
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ tlt detectnet_v2 train -e /workspace/openalpr/SPECS_retrain.txt -r /workspace/openalpr/exp_pruned -k nvidia_tlt
Export
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ tlt detectnet_v2 export -m /workspace/openalpr/exp_unpruned/weights/model.tlt -o /workspace/openalpr/export/unpruned_model.etlt --cal_cache_file /workspace/openalpr/export/unpruned_calibration.bin -e /workspace/openalpr/SPECS_train.txt -k nvidia_tlt --cal_image_dir /workspace/openalpr/lpd/data/image --data_type int8 --batch_size 4 --batches 10 --engine_file /workspace/openalpr/export/unpruned_int8.trt
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ tlt detectnet_v2 export -m /workspace/openalpr/exp_pruned/weights/model.tlt -o /workspace/openalpr/export/pruned_model.etlt --cal_cache_file /workspace/openalpr/export/pruned_calibration.bin -e /workspace/openalpr/SPECS_train.txt -k nvidia_tlt --cal_image_dir /workspace/openalpr/lpd/data/image --data_type int8 --batch_size 4 --batches 10 --engine_file /workspace/openalpr/export/pruned_int8.trt
Evaluate
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ tlt detectnet_v2 evaluate -m /workspace/openalpr/exp_unpruned/weights/model.tlt -k nvidia_tlt -e /workspace/openalpr/SPECS_train.txt
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/openalpr$ tlt detectnet_v2 evaluate -m /workspace/openalpr/exp_pruned/weights/model.tlt -k nvidia_tlt -e /workspace/openalpr/SPECS_train.txt
Retrain the LPR model
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments$ mkdir lprnet
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments$ cd lprnet/
Download https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/release/tlt3.0/misc/dev_blog/LPDR/lpr/preprocess_openalpr_benchmark.py
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ python3 preprocess_openalpr_benchmark.py --input_dir=../../benchmarks/endtoend/us --output_dir=../data/openalpr
Download https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/release/tlt3.0/misc/dev_blog/LPDR/lpr/tutorial_spec.txt
Edit tutorial_spec.txt, changing
augmentation_config {
    output_width: 96
    output_height: 48
    output_channel: 3
    keep_original_prob: 0.3
    transform_prob: 0.5
    rotate_degree: 5
}
to
augmentation_config {
    output_width: 96
    output_height: 48
    output_channel: 3
    keep_original_prob: 0.3
    max_rotate_degree: 5
    rotate_prob: 0.5
    blur_prob: 0.5
}

Download the pretrained model
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ ngc registry model download-version "nvidia/tlt_lprnet:trainable_v1.0"
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ cp tlt_lprnet_vtrainable_v1.0/us_lprnet_baseline18_trainable.tlt .
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ mkdir specs
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ cp tlt_lprnet_vtrainable_v1.0/us_lp_characters.txt specs/
Train the model
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ tlt lprnet train -e /workspace/tlt-experiments/lprnet/tutorial_spec.txt -r /workspace/tlt-experiments/lprnet/ -k nvidia_tlt -m /workspace/tlt-experiments/lprnet/us_lprnet_baseline18_trainable.tlt --use_amp
Export the model
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ tlt lprnet export -m /workspace/tlt-experiments/lprnet/weights/lprnet_epoch-24.tlt -k nvidia_tlt -e /workspace/tlt-experiments/lprnet/tutorial_spec.txt
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ ls weights/lprnet_epoch-24.etlt
Evaluate
(tlt) ~/Data/DeepStream/deepstream_lpr_app/tlt-lpdr/tlt-experiments/lprnet$ tlt lprnet evaluate -m /workspace/tlt-experiments/lprnet/weights/lprnet_epoch-24.tlt -k nvidia_tlt -e /workspace/tlt-experiments/lprnet/tutorial_spec.txt

Tuesday, June 8, 2021

The launcher/sidebar disappears on Ubuntu 18.04

This happens after the screensaver has kicked in; logging in remotely you find
the taskbar has disappeared

Press Alt+F2; in the dialog box that appears, type r
$ gnome-shell -r
$ gnome-shell --replace
$ busctl --user call org.gnome.Shell /org/gnome/Shell org.gnome.Shell Eval s 'Meta.restart("Restarting…")'

Restart GNOME Shell
$ killall -HUP gnome-shell

Monday, June 7, 2021

Reinstalling Ubuntu 18.04

Download the ISO and write it to a USB stick with Rufus
Use the Disks utility to confirm the disk locations
Back up data, preserving file attributes
$ sudo cp -a /path/source/. /path/dest
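To see that cp -a really keeps attributes that a plain cp would drop, a throwaway demonstration:

```shell
# Demo: cp -a preserves permission bits; plain cp would apply the umask.
mkdir -p demo_src demo_dst
echo hi > demo_src/f
chmod 600 demo_src/f
cp -a demo_src/. demo_dst
mode=$(stat -c '%a' demo_dst/f)
echo "copied mode: $mode"
rm -rf demo_src demo_dst
```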

Gitea backup and restore
/var/lib/gitea
$ sudo cp /etc/gitea/app.ini .
$ sudo -u git gitea dump -c /etc/gitea/app.ini
$ unzip gitea-dump-xxxx.zip
$ cd gitea-dump-xxxx
$ mv data/conf/app.ini /etc/gitea/conf/app.ini
$ mv data/* /var/lib/gitea/data/
$ mv log/* /var/lib/gitea/log/
$ mv repos/* /var/lib/gitea/repositories/
$ chown -R gitea:gitea /etc/gitea/conf/app.ini /var/lib/gitea
$ mysql --default-character-set=utf8mb4 -u$USER -p$PASS $DATABASE <gitea-db.sql
$ service gitea restart

mariadb backup and restore
/var/lib/mysql
$ sudo mysql
> SHOW DATABASES;
$ sudo mysqldump --all-databases > all.sql
$ sudo mysql --one-database db_name < all.sql
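Instead of a single all.sql, per-database dumps make selective restores easier. A dry-run sketch that only assembles the commands; on a real server the database list would come from SHOW DATABASES:

```shell
# Hypothetical per-database dump plan; echoes commands instead of running them.
databases="gitea someapp"
plan=""
for db in $databases; do
  plan="$plan
mysqldump --databases $db > $db.sql"
done
echo "$plan"
```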

$ sudo cp -a /opt/tomcat /backup/opt/tomcat
$ sudo cp -a /opt/nvidia /backup/opt/nvidia
$ sudo cp -a /etc/nginx /backup/etc/nginx
$ sudo cp -a /etc/systemd /backup/etc/systemd
$ sudo cp -a /etc/udev/rules.d /backup/udev

$ sudo vi /etc/default/grub
$ sudo update-grub

NVIDIA driver update
$ sudo apt-get install gcc
$ sudo apt-get install make
Download the driver from https://www.nvidia.com/download/driverResults.aspx/168347/en-us
$ chmod 755 NVIDIA-Linux-x86_64-460.32.03.run
$ sudo ./NVIDIA-Linux-x86_64-460.32.03.run
It will complain that nvidia-drm is already loaded,
so the old driver has to be removed first
$ sudo apt-get --purge remove "*cublas*" "*cufft*" "*curand*" \
 "*cusolver*" "*cusparse*" "*npp*" "*nvjpeg*" "cuda*" "nsight*"
$ sudo apt-get purge 'nvidia*'
$ sudo apt-get autoremove
$ sudo reboot
$ sudo ./NVIDIA-Linux-x86_64-460.32.03.run
Use the --no-opengl-files option; otherwise DeepStream only flashes a black window and fails with the error
cuGraphicsGLRegisterBuffer failed with error(304) gst_eglglessink_cuda_init texture = 1
$ sudo ./NVIDIA-Linux-x86_64-460.32.03.run --no-opengl-files --dkms --no-drm
Because Ubuntu still has a driver in use, the driver cannot be updated directly,
but it can be disabled; answer the prompts carefully and run the installer again
Would you like to register the kernel module sources with DKMS? Answer Yes
Install NVIDIA's 32-bit compatibility libraries? Answer No
$ nvidia-smi

Install CUDA; do not use the deb (network) method, which seems to install the wrong version
$ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
$ sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
$ wget https://developer.download.nvidia.com/compute/cuda/11.1.1/local_installers/cuda-repo-ubuntu1804-11-1-local_11.1.1-455.32.00-1_amd64.deb
$ sudo dpkg -i cuda-repo-ubuntu1804-11-1-local_11.1.1-455.32.00-1_amd64.deb
$ sudo apt-key add /var/cuda-repo-ubuntu1804-11-1-local/7fa2af80.pub
$ sudo apt-get update
$ sudo apt-get -y install cuda
$ sudo apt-get -y install cuda-11-1
Specifying the version matters; otherwise the latest one gets installed
$ nvidia-smi
Failed to initialize NVML: Driver/library version mismatch
A reboot fixes this,
but the Driver and CUDA Version shown will change

TensorRT installation
https://developer.nvidia.com/nvidia-tensorrt-download
From there pick the version you need, choosing the deb packages
From the same page open the documentation
and go to the TensorRT Installation Guide
https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-723/install-guide/index.html
Skip to 4.1. Debian Installation
and follow the installation steps

Set the environment variables
PATH
LD_LIBRARY_PATH
pycuda only works with Python 3.7
$ sudo apt-get install python3.7
$ sudo apt-get install python3.7-dev
$ python3.7 -m pip install 'pycuda>=2019.1.1'

Install Nvidia Docker
$ curl https://get.docker.com | sh && sudo systemctl --now enable docker
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
   && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
   && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ curl -s -L https://nvidia.github.io/nvidia-container-runtime/experimental/$distribution/nvidia-container-runtime.list | sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
$ sudo systemctl restart docker
$ sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
$ sudo groupadd docker
$ sudo usermod -a -G docker $USER
$ sudo reboot

Install TensorRT 7.2 OSS
Reference: https://github.com/NVIDIA/TensorRT/tree/master
$ git clone -b master https://github.com/nvidia/TensorRT TensorRT_OSS-7.2.3.4
$ cd TensorRT_OSS-7.2.3.4/
$ git submodule update --init --recursive
$ cd /pathto/TensorRT-7.2.2.3/
$ export TRT_LIBPATH=`pwd`
$ cd /pathto/TensorRT_OSS-7.2.3.4/
$ ./docker/build.sh --file docker/ubuntu-18.04.Dockerfile --tag tensorrt-ubuntu-1804 --cuda 11.1
$ ./docker/launch.sh --tag tensorrt-ubuntu-1804 --gpus all
trtuser@c2936e108d43:/workspace$ cd $TRT_OSSPATH
trtuser@c2936e108d43:/workspace/TensorRT$ mkdir -p build && cd build
trtuser@c2936e108d43:/workspace/TensorRT/build$ cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out
trtuser@c2936e108d43:/workspace/TensorRT/build$ make -j$(nproc)
trtuser@c2936e108d43:/workspace/TensorRT/build$ exit
$ cd ..
$ mkdir backup
$ mv TensorRT-7.2.2.3/targets/x86_64-linux-gnu/lib/libnvinfer_plugin.so.7.2.2 backup
$ cp TensorRT_OSS-7.2.3.4/build/out/libnvinfer_plugin.so.7.2.3 TensorRT-7.2.2.3/targets/x86_64-linux-gnu/lib/

install xpra
https://github.com/Xpra-org/xpra/blob/master/docs/Build/Debian.md
$ git clone https://github.com/Xpra-org/xpra.git
$ sudo ./setup.py install
Exception: ERROR: cannot find a valid pkg-config entry for nvjpeg-11.4 using PKG_CONFIG_PATH=(empty)
$ vi setup.py
Search for if nvjpeg_ENABLED:
Not far below there are two occurrences of for v in ("11.4", "11.3", ...):
Change both to for v in ("11.1",): (the trailing comma keeps it a one-element tuple)
There is also ld: cannot find -lcuda
$ sudo ln -s /usr/local/cuda-11.1/lib64/stubs/libcuda.so /usr/lib
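One Python detail hides in that setup.py edit: ("11.1") without a trailing comma is just a parenthesized string, so the for loop would iterate its characters, while ("11.1",) is a one-element tuple. A quick demonstration:

```shell
# In Python, ("11.1") is a string; ("11.1",) is a 1-tuple.
chars=$(python3 -c 'print(list(("11.1")))')
tuple=$(python3 -c 'print(list(("11.1",)))')
echo "string iterates as: $chars"
echo "tuple iterates as:  $tuple"
```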

pip upgrade fails
$ pip install --upgrade --no-cache-dir pip
$ python3 -m pip install --upgrade --no-cache-dir pip -i https://pypi.python.org/simple

pip install --upgrade pip fails with:

Exception:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 215, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 290, in run
    with self._build_session(options) as session:
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 69, in _build_session
    if options.cache_dir else None
  File "/usr/lib/python3.6/posixpath.py", line 80, in join
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not int


Thursday, June 3, 2021

Running VNC through frp on Windows 10

Edit frps.ini
[common]
bind_port = 7000
token = password

Edit frps.vbs
set ws=Wscript.CreateObject("Wscript.Shell")
ws.Run "C:\frp_0.31.2_windows_amd64\frps.exe -c C:\frp_0.31.2_windows_amd64\frps.ini", 0

gpedit.msc
Local Computer Policy / Computer Configuration / Scripts (Startup/Shutdown)
Startup / Properties
Add frps.vbs


Edit frpc.ini
[common]
server_addr = remote.server.ip
server_port = 7000
token = password

[vnc]
type = tcp
local_ip = 127.0.0.1
local_port = 5800
remote_port = 5900
use_encryption = true
use_compression = true

Edit frpc.vbs
set ws=Wscript.CreateObject("Wscript.Shell")
ws.Run "C:\frp_0.31.2_windows_amd64\frpc.exe -c C:\frp_0.31.2_windows_amd64\frpc.ini", 0

gpedit.msc
Local Computer Policy / Computer Configuration / Scripts (Startup/Shutdown)
Startup / Properties
Add frpc.vbs

Tuesday, June 1, 2021

Training Tesseract in Docker (part 2)

$ docker start t4cmp
$ docker exec -it t4cmp bash
# cd /home/workspace/tesseract
# export TESSDATA_PREFIX=/root/tesstutorial/tesseract/tessdata
# vi src/training/tesstrain_utils.sh
Find phase_E_extract_features()
Append --psm 7 after run_command tesseract ${img_file} ${img_file%.*}
Remove the & after ${box_config} ${config} &
Remove jobs="$jobs $!" and wait $jobs
# vi src/training/language-specific.sh
Add the language you want to train

src/training/tesstrain.sh --fonts_dir /usr/share/fonts --lang plate \
--linedata_only --my_boxtiff_dir /home/tmp --noextract_font_properties \
--langdata_dir ../langdata --tessdata_dir ~/tesstutorial/tesseract/tessdata  \
--output_dir ~/tesstutorial/platetrain

lstmtraining --model_output ~/tesstutorial/impact_from_full/impact \
--continue_from ~/tesstutorial/impact_from_full/eng.lstm \
--traineddata ~/tesstutorial/tesseract/tessdata/eng.traineddata \
--train_listfile ~/tesstutorial/platetrain/plate.training_files.txt \
--max_iterations 400 

lstmtraining --stop_training \
--continue_from ~/tesstutorial/impact_from_full/impact_0.031000_2_400.checkpoint \
--traineddata ~/tesstutorial/tesseract/tessdata/best/eng.traineddata \
--model_output ~/tesstutorial/impact_from_full/plate.traineddata

Delete images that could not be trained
for i in `find /tmp/plate-2021-04-13.u4C -name "*.tif"`
do
  LSTMF=${i//.tif/.lstmf}
  if [ ! -f "$LSTMF" ]; then
    LSTMF=`basename "$LSTMF"`
    TIF=${LSTMF//.lstmf/.tif}
    BOX=${LSTMF//.lstmf/.box}
    rm /home/tmp/$TIF /home/tmp/$BOX
  fi
done