
Wednesday, June 18, 2025

Jetpack 6.0: upgrading CUDA 12.2 to 12.6

This post revisits "Install Flux & ComfyUI" (below) and spells out some of its details more clearly.

Reference: https://www.jetson-ai-lab.com/tutorial_comfyui_flux.html
Reference: https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Installation.html#id10

$ sudo jtop
CUDA: 12.2.140
cuDNN: 8.9.4.25
TensorRT: 8.6.2.3

We need to upgrade from cuda-12.2 to cuda-12.6, and the upgrade removes DeepStream 7.0,
so DeepStream 7.1 has to be installed manually afterwards.

Record the current package versions:
$ dpkg -l > dpkg_jp6.0.txt

$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh
$ chmod +x Miniconda3-latest-Linux-aarch64.sh
$ ./Miniconda3-latest-Linux-aarch64.sh
$ conda update conda
$ conda create -n comfyui python=3.10
$ conda init bash
$ cat .bashrc
$ conda activate comfyui
$ conda info --envs
$ conda deactivate
$ conda activate comfyui
$ conda list

$ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb
$ sudo dpkg -i cuda-keyring_1.1-1_all.deb
$ sudo apt-get update
$ sudo apt-get -y install cuda-toolkit-12-6 cuda-compat-12-6
$ update-alternatives --list cuda
$ update-alternatives --display cuda
$ sudo update-alternatives --config cuda
Select cuda-12.6 when prompted.

$ apt list -a cudnn
$ sudo apt-get install cudnn=9.10.2-1
$ apt list -a python3-libnvinfer
$ sudo apt-get install python3-libnvinfer=10.7.0.23+cuda12.6

The following packages will be REMOVED:
  deepstream-7.0 libnvparsers-dev nvidia-tensorrt-dev
$ sudo apt-get install python3-libnvinfer-dev=10.7.0.23-1+cuda12.6
$ sudo jtop
CUDA: 12.6.85
cuDNN: 9.10.2
TensorRT: 10.7.0.23
$ wget --content-disposition 'https://api.ngc.nvidia.com/v2/resources/org/nvidia/deepstream/7.1/files?redirect=true&path=deepstream-7.1_7.1.0-1_arm64.deb' -O deepstream-7.1_7.1.0-1_arm64.deb
$ sudo apt-get install ./deepstream-7.1_7.1.0-1_arm64.deb

$ export BNB_CUDA_VERSION=126
$ export LD_LIBRARY_PATH=/usr/local/cuda-12.6/lib64:$LD_LIBRARY_PATH

$ git clone https://github.com/timdettmers/bitsandbytes.git
$ cd bitsandbytes

$ pip uninstall numpy
$ pip install "numpy<2.0"
$ mkdir -p build
$ cd build
$ cmake .. -DCOMPUTE_BACKEND=cuda -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-12.6
$ make -j$(nproc)
$ cd ..
$ python setup.py install

$ pip install http://jetson.webredirect.org/jp6/cu124/+f/5fe/ee5f5d1a75229/torch-2.3.0-cp310-cp310-linux_aarch64.whl
$ pip install http://jetson.webredirect.org/jp6/cu124/+f/988/cb71323efff87/torchvision-0.18.0a0+6043bc2-cp310-cp310-linux_aarch64.whl
$ pip install http://jetson.webredirect.org/jp6/cu124/+f/0aa/a066463c02b4a/torchaudio-2.3.0+952ea74-cp310-cp310-linux_aarch64.whl

$ python3
>>> import bitsandbytes as bnb
>>> print(bnb.__version__)
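A slightly fuller sanity check can confirm the CUDA build actually works; a minimal sketch (script name is mine; quantize_4bit/dequantize_4bit are assumed available, as in recent bitsandbytes releases):
$ vi bnb_check.py
import torch
import bitsandbytes as bnb

# torch must see the Jetson GPU
print(torch.__version__, torch.cuda.is_available())

# round-trip a small fp16 tensor through 4-bit quantization
x = torch.randn(64, 64, dtype=torch.float16, device="cuda")
q, state = bnb.functional.quantize_4bit(x)
y = bnb.functional.dequantize_4bit(q, state)
print((x - y).abs().mean().item())  # should be a small reconstruction error
$ python bnb_check.py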

$ git clone https://github.com/comfyanonymous/ComfyUI.git
$ cd ComfyUI
$ pip install -r requirements.txt
$ cd custom_nodes
$ git clone https://github.com/ltdrdata/ComfyUI-Manager.git
$ cd ..
$ mkdir workflows
$ cd workflows
$ wget https://www.jetson-ai-lab.com/assets/workflow_agx_orin_4steps.json
$ cd ..
Go to https://huggingface.co/black-forest-labs/FLUX.1-schnell/tree/main
Download flux1-schnell.safetensors into models/unet
Download ae.safetensors into models/vae/FLUX1
Go to https://huggingface.co/stabilityai/stable-diffusion-3-medium/tree/main/text_encoders
Download clip_l.safetensors and t5xxl_fp8_e4m3fn.safetensors into models/clip
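These downloads can also be scripted; a minimal sketch with huggingface_hub (script name is mine; stable-diffusion-3-medium is a gated repo, so accept its license and run huggingface-cli login first):
$ vi download_models.py
from huggingface_hub import hf_hub_download

hf_hub_download("black-forest-labs/FLUX.1-schnell",
                "flux1-schnell.safetensors", local_dir="models/unet")
hf_hub_download("black-forest-labs/FLUX.1-schnell",
                "ae.safetensors", local_dir="models/vae/FLUX1")
# these two land under models/clip/text_encoders/; move them up into models/clip
hf_hub_download("stabilityai/stable-diffusion-3-medium",
                "text_encoders/clip_l.safetensors", local_dir="models/clip")
hf_hub_download("stabilityai/stable-diffusion-3-medium",
                "text_encoders/t5xxl_fp8_e4m3fn.safetensors", local_dir="models/clip")
$ python download_models.py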
$ python main.py --port=8080
Open http://127.0.0.1:8080/ in a browser, press Queue Prompt, and wait...
When loading workflow_agx_orin_4steps.json, change vae_name in the "Load VAE" node to FLUX1/ae.safetensors

Record the current package versions:
$ dpkg -l > dpkg_jp6.0_comfyui.txt

To roll back from cuda-12.6 to cuda-12.2:
Compare against the recorded version lists dpkg_jp6.0.txt and dpkg_jp6.0_comfyui.txt
Download the packages from https://repo.download.nvidia.com/jetson
Remove the unneeded packages:
$ sudo apt-get remove --purge <package>
Dependency errors are common while installing the packages, so the install order sometimes has to change,
or the whole set has to be reinstalled once more at the end to make sure everything is in place.
$ cd jetpack_6.0
$ ./download.sh
$ ./rollback.sh
$ ./install.sh

Reference: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-cuda/tags
$ xhost +
$ docker run -it --rm --net=host --runtime nvidia \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix/:/tmp/.X11-unix \
  nvcr.io/nvidia/l4t-cuda:11.4.19-runtime
$ docker run -it --rm --net=host --runtime nvidia \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix/:/tmp/.X11-unix \
  nvcr.io/nvidia/l4t-cuda:12.2.12-runtime
$ docker run -it --rm --net=host --runtime nvidia \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix/:/tmp/.X11-unix \
  nvcr.io/nvidia/l4t-cuda:12.6.11-runtime
docker: Error response from daemon: failed to create task for container: 
failed to create shim task: OCI runtime create failed: 
failed to create NVIDIA Container Runtime: failed to construct OCI spec modifier: 
requirements not met: unsatisfied condition: cuda>=12.6 (cuda=12.2): unknown.
Because JetPack 6.0 ships with CUDA 12.2,
the Docker image used cannot require a CUDA newer than 12.2.

$ docker run -it --rm --net=host --runtime nvidia \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix/:/tmp/.X11-unix \
  -w /opt/nvidia/deepstream/deepstream-7.0 \
  nvcr.io/nvidia/deepstream-l4t:7.0-samples-multiarch

Because JetPack 6.0 ships with CUDA 12.2
and the image cannot require a newer CUDA,
the DeepStream 7.1 image cannot be used either:
$ docker run -it --rm --net=host --runtime nvidia \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix/:/tmp/.X11-unix \
  -w /opt/nvidia/deepstream/deepstream-7.1 \
  nvcr.io/nvidia/deepstream-l4t:7.1-samples-multiarch

Thursday, June 12, 2025

Install Flux & ComfyUI

Reference: https://www.jetson-ai-lab.com/tutorial_comfyui_flux.html

$ sudo jtop
CUDA: 12.2.140
cuDNN: 8.9.4.25
TensorRT: 8.6.2.3

We need to upgrade from cuda-12.2 to cuda-12.6, and the upgrade removes DeepStream 7.0,
so DeepStream 7.1 has to be installed manually afterwards.

Record the current package versions:
$ dpkg -l > dpkg_jp6.0.txt

$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh
$ chmod +x Miniconda3-latest-Linux-aarch64.sh
$ ./Miniconda3-latest-Linux-aarch64.sh
$ conda update conda
$ conda create -n comfyui python=3.10
$ conda init bash
$ cat .bashrc
$ conda activate comfyui
$ conda info --envs
$ conda deactivate
$ conda activate comfyui
$ conda list

$ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb
$ sudo dpkg -i cuda-keyring_1.1-1_all.deb
$ sudo apt-get update
$ sudo apt-get -y install cuda-toolkit-12-6 cuda-compat-12-6
$ update-alternatives --list cuda
$ update-alternatives --display cuda
$ sudo update-alternatives --config cuda
Select cuda-12.6 when prompted.

$ sudo apt-get install cudnn=9.5.1-1
$ sudo apt-get install python3-libnvinfer=10.6.0.26-1+cuda12.6
The following packages will be REMOVED:
  deepstream-7.0 libnvparsers-dev nvidia-tensorrt-dev
$ sudo apt-get install python3-libnvinfer-dev=10.6.0.26-1+cuda12.6
$ sudo jtop
CUDA: 12.6.77
cuDNN: 9.5.1
TensorRT: 10.6.0.26
$ wget --content-disposition 'https://api.ngc.nvidia.com/v2/resources/org/nvidia/deepstream/7.1/files?redirect=true&path=deepstream-7.1_7.1.0-1_arm64.deb' -O deepstream-7.1_7.1.0-1_arm64.deb
$ sudo apt-get install ./deepstream-7.1_7.1.0-1_arm64.deb

$ export BNB_CUDA_VERSION=126
$ export LD_LIBRARY_PATH=/usr/local/cuda-12.6/lib64:$LD_LIBRARY_PATH

$ git clone https://github.com/timdettmers/bitsandbytes.git
$ cd bitsandbytes

$ pip uninstall numpy
$ pip install "numpy<2.0"
$ mkdir -p build
$ cd build
$ cmake .. -DCOMPUTE_BACKEND=cuda -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-12.6
$ make -j$(nproc)
$ cd ..
$ python setup.py install

$ pip install http://jetson.webredirect.org/jp6/cu124/+f/5fe/ee5f5d1a75229/torch-2.3.0-cp310-cp310-linux_aarch64.whl
$ pip install http://jetson.webredirect.org/jp6/cu124/+f/988/cb71323efff87/torchvision-0.18.0a0+6043bc2-cp310-cp310-linux_aarch64.whl
$ pip install http://jetson.webredirect.org/jp6/cu124/+f/0aa/a066463c02b4a/torchaudio-2.3.0+952ea74-cp310-cp310-linux_aarch64.whl

$ python3
>>> import bitsandbytes as bnb
>>> print(bnb.__version__)

$ git clone https://github.com/comfyanonymous/ComfyUI.git
$ cd ComfyUI
$ pip install -r requirements.txt
$ cd custom_nodes
$ git clone https://github.com/ltdrdata/ComfyUI-Manager.git
$ cd ..
$ mkdir workflows
$ cd workflows
$ wget https://www.jetson-ai-lab.com/assets/workflow_agx_orin_4steps.json
$ cd ..
Go to https://huggingface.co/black-forest-labs/FLUX.1-schnell/tree/main
Download flux1-schnell.safetensors into models/unet
Download ae.safetensors into models/vae/FLUX1
Go to https://huggingface.co/stabilityai/stable-diffusion-3-medium/tree/main/text_encoders
Download clip_l.safetensors and t5xxl_fp8_e4m3fn.safetensors into models/clip
$ python main.py --port=8080
http://127.0.0.1:8080/
When loading workflow_agx_orin_4steps.json, change vae_name in the "Load VAE" node to FLUX1/ae.safetensors

Record the current package versions:
$ dpkg -l > dpkg_jp6.0_cuda-12.6.txt

To roll back from cuda-12.6 to cuda-12.2:
$ cd jetpack_6.0
$ ./download.sh
$ ./install.sh
Compare against the recorded version lists dpkg_jp6.0.txt and dpkg_jp6.0_cuda-12.6.txt
Remove the unneeded packages:
$ sudo apt-get remove --purge <package>

Reference: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-cuda/tags
$ xhost +
$ docker run -it --rm --net=host --runtime nvidia \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix/:/tmp/.X11-unix \
  nvcr.io/nvidia/l4t-cuda:11.4.19-runtime
$ docker run -it --rm --net=host --runtime nvidia \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix/:/tmp/.X11-unix \
  nvcr.io/nvidia/l4t-cuda:12.2.12-runtime
$ docker run -it --rm --net=host --runtime nvidia \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix/:/tmp/.X11-unix \
  nvcr.io/nvidia/l4t-cuda:12.6.11-runtime
docker: Error response from daemon: failed to create task for container: 
failed to create shim task: OCI runtime create failed: 
failed to create NVIDIA Container Runtime: failed to construct OCI spec modifier: 
requirements not met: unsatisfied condition: cuda>=12.6 (cuda=12.2): unknown.


$ docker run -it --rm --net=host --runtime nvidia \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix/:/tmp/.X11-unix \
  -w /opt/nvidia/deepstream/deepstream-7.0 \
  nvcr.io/nvidia/deepstream-l4t:7.0-samples-multiarch

$ docker run -it --rm --net=host --runtime nvidia \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix/:/tmp/.X11-unix \
  -w /opt/nvidia/deepstream/deepstream-7.1 \
  nvcr.io/nvidia/deepstream-l4t:7.1-samples-multiarch

Tuesday, April 15, 2025

Controlling a PTZ dome camera via ONVIF

Reference: https://github.com/FalkTannhaeuser/python-onvif-zeep
Reference: https://www.onvif.org/onvif/ver20/util/operationIndex.html

$ python -m venv --system-site-packages /mnt/Data/envs/onvif
$ source /mnt/Data/envs/onvif/bin/activate
$ pip install --upgrade onvif_zeep
$ git clone https://github.com/FalkTannhaeuser/python-onvif-zeep.git

$ onvif-cli devicemgmt GetHostname --user 'admin' --password 'sh22463458' --host '192.168.113.203' --port 80

Query the ProfileToken:
$ onvif-cli media GetProfiles --user 'admin' --password 'sh22463458' --host '192.168.113.203' --port 80 | grep -o "'token': '[^']*'" | awk -F': ' 'END {print $2}'
'MediaProfile00002'
$ onvif-cli ptz GotoPreset "{'ProfileToken':'MediaProfile00002', 'PresetToken':'9'}" --user 'admin' --password 'sh22463458' --host '192.168.113.203' --port 80
$ onvif-cli ptz GetPresets "{'ProfileToken':'MediaProfile00002'}" --user 'admin' --password 'sh22463458' --host '192.168.113.203' --port 80
$ onvif-cli ptz AbsoluteMove "{'ProfileToken':'MediaProfile00002', 'Position':{'PanTilt':{'x': -0.05, 'y': 0.6}, 'Zoom':0.5}}" --user 'admin' --password 'sh22463458' --host '192.168.113.203' --port 80

Relative move: 0 = no movement; positive = up, right, zoom in; negative = down, left, zoom out
Upper left:
$ onvif-cli ptz RelativeMove "{'ProfileToken':'MediaProfile00002', 'Translation':{'PanTilt':{'x': 0.105, 'y': 0.22}, 'Zoom':0.3}}" --user 'admin' --password 'sh22463458' --host '192.168.113.203' --port 80
Lower right:
$ onvif-cli ptz RelativeMove "{'ProfileToken':'MediaProfile00002', 'Translation':{'PanTilt':{'x': -0.115, 'y': -0.201}, 'Zoom':0.3}}" --user 'admin' --password 'sh22463458' --host '192.168.113.203' --port 80
Upper left:
$ onvif-cli ptz RelativeMove "{'ProfileToken':'MediaProfile00002', 'Translation':{'PanTilt':{'x': 0.105, 'y': 0.21}, 'Zoom':0.15}}" --user 'admin' --password 'sh22463458' --host '192.168.113.203' --port 80
Lower right:
$ onvif-cli ptz RelativeMove "{'ProfileToken':'MediaProfile00002', 'Translation':{'PanTilt':{'x': -0.105, 'y': -0.21}, 'Zoom':0.15}}" --user 'admin' --password 'sh22463458' --host '192.168.113.203' --port 80
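The same moves can be scripted with the python-onvif-zeep API instead of onvif-cli; a minimal sketch using the same camera address, credentials, and profile as above (script name is mine):
$ vi ptz_move.py
from onvif import ONVIFCamera

cam = ONVIFCamera('192.168.113.203', 80, 'admin', 'sh22463458')
media = cam.create_media_service()
token = media.GetProfiles()[-1].token   # e.g. 'MediaProfile00002'

ptz = cam.create_ptz_service()
req = ptz.create_type('RelativeMove')
req.ProfileToken = token
# positive = up/right/zoom in, negative = down/left/zoom out
req.Translation = {'PanTilt': {'x': 0.105, 'y': 0.22}, 'Zoom': {'x': 0.3}}
ptz.RelativeMove(req)
$ python ptz_move.py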

Thursday, February 20, 2025

Using web search in Open WebUI

https://developers.google.com/custom-search/v1/introduction?hl=zh-tw
Click "Programmable Search Engine (free edition) users: Get a Key" to obtain an API key

https://programmablesearchengine.google.com/controlpanel/all
Click Add
Enter a search engine name
Enter the key obtained above in the Paid Element API key field
Note down the search engine ID

In Open WebUI, click the user at the bottom left and choose Settings
Click Admin Settings / Web Search
Turn on Enable Web Search
Web Search Engine: google_pse
Turn on Full Context Mode
Enter the Google PSE API key
Enter the Google PSE Engine ID

Web Search Engine: duckduckgo
This one can be used directly, with no key required

Below the message input box in Open WebUI you can now toggle Web Search

Wednesday, February 19, 2025

Installing ollama, open-webui, and nginx

Reference: https://github.com/ollama/ollama
Reference: https://hub.docker.com/r/ollama/ollama
Reference: https://www.53ai.com/news/OpenSourceLLM/2024072585037.html

$ docker run -d --gpus=all -p 11434:11434 --name ollama \
  -v /mnt/Data/ollama/ollama_volume:/root/.ollama \
  ollama/ollama
$ docker exec -it ollama ollama run deepseek-r1
$ git clone https://github.com/ggerganov/llama.cpp.git
$ cd llama.cpp
$ cmake -B build
$ cmake --build build --config Release
$ pip install huggingface_hub

Convert a Hugging Face model to GGUF format
$ vi download.py
from huggingface_hub import snapshot_download, login

login("hf_BqLATKBqbVzOWNBJcFMwHKzCJfu")

# download the model
snapshot_download(
    "taide/Llama-3.1-TAIDE-LX-8B-Chat",
    local_dir="taide_Llama-3.1-TAIDE-LX-8B-Chat",
    local_dir_use_symlinks=False,
    ignore_patterns=["*.gguf"]
)

$ vi convert_hf_to_gguf_update.py
Add the lines below to the models list; pay attention to the choice of TOKENIZER_TYPE
    {"name": "taide_Llama-3.1-TAIDE-LX-8B-Chat", "tokt": TOKENIZER_TYPE.BPE, "repo": "https://huggingface.co/taide/Llama-3.1-TAIDE-LX-8B-Chat"},
    {"name": "yentinglin_Llama-3-Taiwan-8B-Instruct", "tokt": TOKENIZER_TYPE.BPE, "repo": "https://huggingface.co/yentinglin/Llama-3-Taiwan-8B-Instruct"},
$ python convert_hf_to_gguf_update.py hf_BqLATKBqbVzOWNBJcFMwHKzCJfu
$ python convert_hf_to_gguf.py taide_Llama-3.1-TAIDE-LX-8B-Chat --outtype f16 --outfile taide_Llama-3.1-TAIDE-LX-8B-Chat.fp16.gguf
$ llama.cpp/build/bin/llama-quantize taide_Llama-3.1-TAIDE-LX-8B-Chat.fp16.gguf Q4_K_M
$ mv ggml-model-Q4_K_M.gguf taide_Llama-3.1-TAIDE-LX-8B-Chat-Q4_K_M.gguf
$ vi Modelfile.taide-8b
FROM ./taide_Llama-3.1-TAIDE-LX-8B-Chat-Q4_K_M.gguf
# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1
# set the system message
SYSTEM """
我是一個萬事通
"""

$ docker exec -it ollama /bin/bash
# cd /root/.ollama
# ollama create taide-8b -f ./Modelfile.taide-8b
# ollama list
# ollama show taide-8b
# ollama rm taide-8b
# ollama ps
# ollama run taide-8b
>>> /bye
# OLLAMA_HOST=127.0.0.1:11434 ollama serve
$ curl http://localhost:11434/api/generate -d '{
  "model": "yentinglin-8b", 
  "prompt": "建議適合ai的程式語言"
}'
$ curl http://localhost:11434/api/generate -d '{
  "model": "yentinglin-8b", 
  "prompt": "建議適合ai的程式語言",
  "stream", false
}'
$ curl http://localhost:11434/api/chat -d '{
  "model": "yentinglin-8b", 
  "messages": [
    {"role": "user", "content": "建議適合ai的程式語言"}
  ]
}'
$ curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "yentinglin-8b",
        "messages": [
            {
                "role": "system",
                "content": "你是一個萬事通"
            },
            {
                "role": "user",
                "content": "眼睛酸痛,怎麼辦?"
            }
        ]
    }'
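The same chat endpoint can be called from Python; a minimal sketch using requests (script name is mine):
$ vi ollama_chat.py
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "yentinglin-8b",
        "messages": [{"role": "user", "content": "建議適合ai的程式語言"}],
        "stream": False,  # one JSON object instead of a line-delimited stream
    },
)
print(resp.json()["message"]["content"])
$ python ollama_chat.py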

$ docker logs ollama

$ python ../llama.cpp/convert_hf_to_gguf.py yentinglin_Llama-3-Taiwan-8B-Instruct --outtype f16 --outfile yentinglin_Llama-3-Taiwan-8B-Instruct.fp16.gguf
$ llama.cpp/build/bin/llama-quantize yentinglin_Llama-3-Taiwan-8B-Instruct.fp16.gguf Q4_K_M



$ docker run -d -p 3000:8080 --gpus all \
  --add-host=host.docker.internal:host-gateway \
  -v /mnt/Data/ollama/open-webui_volume:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:cuda
  
Enter http://localhost:3000 in Firefox Web Browser
The error "This address is restricted" appears
Open the Firefox settings:
Type about:config in the address bar and press the "Accept the Risk and Continue" button
Search for network.security.ports.banned.override, select "String", press +
Enter port 3000 and press the check mark
Reload http://localhost:3000

Chrome settings:
chrome://flags/#unsafely-treat-insecure-origin-as-secure
Enter the URL http://localhost:3000

Install nginx
Reference: https://docs.openwebui.com/tutorials/https-nginx/
Reference: https://yingrenn.blogspot.com/2020/07/ssl-nginx.html
$ vi nginx.conf
server {
    listen 443 ssl;
    server_name  www.domain.com.tw;
    ssl_certificate /etc/nginx/conf/Certs/server.pem;
    ssl_certificate_key /etc/nginx/conf/Certs/server.key;
    ssl_trusted_certificate /etc/nginx/conf/Certs/caChain.crt;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
    ssl_prefer_server_ciphers on;
    
    location / {
        proxy_set_header HOST $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://host.docker.internal:3000;
        
        # Add WebSocket support (Necessary for version 0.5.0 and up)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        
        # (Optional) Disable proxy buffering for better streaming response from models
        proxy_buffering off;
    }
}
server {
     listen 80;
     server_name www.domain.com.tw;
     return 301 https://$host$request_uri; 
}

$ docker run -itd --name nginx \
  -p 80:80 -p 443:443 \
  --add-host=host.docker.internal:host-gateway \
  -v /mnt/Data/ollama/nginx/conf.d/nginx.conf:/etc/nginx/conf.d/nginx.conf \
  -v /mnt/Data/ollama/nginx/conf:/etc/nginx/conf \
  -m 100m library/nginx:latest

https://www.domain.com.tw

Tuesday, February 4, 2025

ASR (speech recognition)

Reference: https://speaches-ai.github.io/speaches/
Reference: https://github.com/speaches-ai/speaches/tree/master

$ curl --silent --remote-name https://raw.githubusercontent.com/speaches-ai/speaches/master/compose.yaml
$ curl --silent --remote-name https://raw.githubusercontent.com/speaches-ai/speaches/master/compose.cuda.yaml
$ curl --silent --remote-name https://raw.githubusercontent.com/speaches-ai/speaches/master/compose.cuda-cdi.yaml
$ export COMPOSE_FILE=compose.cuda-cdi.yaml

Set up CUDA with the CDI (Container Device Interface) feature enabled:
$ sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

Edit compose.cuda-cdi.yaml: add the command entry and adjust devices
services:
  speaches:
    command: ["uvicorn", "--factory", "speaches.main:create_app", "--ws-ping-interval", "1000", "--ws-ping-timeout", "1200"]
    deploy:
      resources:
        reservations:
          # WARN: requires Docker Compose 2.24.2
          # https://docs.docker.com/reference/compose-file/merge/#replace-value
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities:
                - gpu

The server log showed the following error:
websockets.exceptions.ConnectionClosedError: sent 1011 (internal error) keepalive ping timeout; no close frame received
which is why --ws-ping-interval and --ws-ping-timeout were added to the command in compose.cuda-cdi.yaml

$ docker compose up --detach
$ docker compose stop
$ docker compose rm
$ docker compose logs
$ docker inspect speaches
$ docker cp speaches:/home/ubuntu/speaches/speaches/config.py .

$ docker compose exec speaches sh
$ docker compose run -d speaches uvicorn --factory speaches.main:create_app --ws-ping-interval=10 --ws-ping-timeout=12

The server log showed the following error:
INFO:speaches.routers.stt:audio_receiver:262:Not enough speech in the last 30.0 seconds.
$ vi speaches/src/speaches/config.py
inactivity_window_seconds: float = 1000.0

API documentation:
http://localhost:8000/docs
http://localhost:8000/redoc

$ curl -X POST -F "file=@/mnt/Data/Whisper/examples/《大隋说书人 》 01.mp3" -F "prompt=歡迎收聽第一集處女觀大隨雍洲且墨城深秋夜" -F "language=zh" http://localhost:8000/v1/audio/transcriptions

In Ubuntu Settings / Sound: Input, select the correct input source; the on-screen meter shows the input level
List the available capture devices:
$ arecord -l
**** List of CAPTURE Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALCS1200A Analog [ALCS1200A Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 2: ALCS1200A Alt Analog [ALCS1200A Alt Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

Use card 0:
$ ffmpeg -f alsa -i hw:0 -acodec libmp3lame -b:a 128k -abr 1 aaa.mp3
Press q to stop

No sound was captured with -i hw:0, so use -i default instead
$ arecord -L
default
    Playback/recording through the PulseAudio sound server

Use the default input source:
$ ffmpeg -f alsa -i default -acodec libmp3lame -b:a 128k -abr 1 aaa.mp3
Press q to stop
$ ffmpeg -loglevel quiet -f alsa -i default -ac 1 -ar 16000 -f s16le aaa.wav
Press q to stop
Strip the metadata from the mp3:
$ ffmpeg -hide_banner -i '/mnt/Data/Whisper/examples/《大隋说书人 》 01.mp3' -c:v copy -c:a copy -map_metadata -1 test.mp3
Convert the mp3 to PCM:
$ ffmpeg -i test.mp3 -f s16le -ar 16000 -ac 1 test.pcm

$ cat test.pcm | pv -qL 32000 | websocat --no-close --binary 'ws://localhost:8000/v1/audio/transcriptions?language=zh'

The client showed the following error:
Closing WebSocket connection due to ping timeout
so add --ping-timeout and --ping-interval to the command

$ cat test.pcm | pv -qL 32000 | websocat --no-close --binary --ping-timeout 12000 --ping-interval 10000 'ws://localhost:8000/v1/audio/transcriptions?language=zh'

Record from the microphone into a PCM file:
$ ffmpeg -f alsa -ar 16000 -i default -ac 1 -f s16le aaa.pcm
Convert it to mp3:
$ ffmpeg -f s16le -ar 16000 -ac 1 -i aaa.pcm -codec:a libmp3lame aaa.mp3
$ cat aaa.pcm | pv -aL 32000 | websocat --no-close --binary --ping-timeout 12000 --ping-interval 10000 'ws://localhost:8000/v1/audio/transcriptions?language=zh'
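The websocat pipeline can also be reproduced in Python; a rough sketch using the websockets package (script name is mine; the pacing approximates pv -L 32000):
$ vi stream_pcm.py
import asyncio
import websockets

async def stream(path):
    uri = "ws://localhost:8000/v1/audio/transcriptions?language=zh"
    async with websockets.connect(uri, ping_interval=10, ping_timeout=12) as ws:
        with open(path, "rb") as f:
            while chunk := f.read(4000):     # ~125 ms of 16 kHz s16le mono
                await ws.send(chunk)
                await asyncio.sleep(0.125)   # ~32000 bytes/s, like pv -L 32000
        async for message in ws:             # print transcriptions as they arrive
            print(message)

asyncio.run(stream("aaa.pcm"))
$ python stream_pcm.py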

Test the CLI:
$ export OPENAI_BASE_URL=http://localhost:8000/v1/
$ export OPENAI_API_KEY="cant-be-empty"
$ openai api audio.transcriptions.create -m Systran/faster-whisper-large-v3 -f '/mnt/Data/Whisper/examples/《大隋说书人 》 01.mp3' --response-format text
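The same transcription through the OpenAI Python client, pointed at the local speaches server; a minimal sketch (script name is mine):
$ vi transcribe.py
from openai import OpenAI

# any non-empty key works for the local server
client = OpenAI(base_url="http://localhost:8000/v1/", api_key="cant-be-empty")

with open("/mnt/Data/Whisper/examples/《大隋说书人 》 01.mp3", "rb") as f:
    text = client.audio.transcriptions.create(
        model="Systran/faster-whisper-large-v3",
        file=f,
        response_format="text",
    )
print(text)
$ python transcribe.py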


Apply for an OPENAI_API_KEY and test it:
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

$ curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer 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" \
  -d '{
    "model": "gpt-4o-mini",
    "store": true,
    "messages": [
      {"role": "user", "content": "write a haiku about ai"}
    ]
  }'

Friday, January 17, 2025

websocat tests

Reference: https://github.com/vi/websocat
Pick an install binary from
https://github.com/vi/websocat/releases
For my Ubuntu machine that is websocat.x86_64-unknown-linux-musl

$ wget https://github.com/vi/websocat/releases/download/v1.14.0/websocat.x86_64-unknown-linux-musl -O websocat
$ chmod +x websocat
$ sudo mv websocat /usr/local/bin/

Check that it runs by querying the version:
$ websocat --version

Connect to the public echo server:
$ websocat ws://ws.vi-server.org/mirror
123
123
ABC
ABC
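The same round trip can be done from Python; a minimal sketch with the websockets package (script name is mine):
$ vi echo_test.py
import asyncio
import websockets

async def main():
    async with websockets.connect("ws://ws.vi-server.org/mirror") as ws:
        await ws.send("123")
        print(await ws.recv())  # the mirror server echoes "123" back

asyncio.run(main())
$ python echo_test.py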

Connect to the public echo server using Docker:
$ docker run --rm -ti ghcr.io/vi/websocat:nightly wss://ws.vi-server.org/mirror
123
123
ABC
ABC

Start a receiving server:
A$ websocat -s 1234
Listening on ws://127.0.0.1:1234/
ABC
123

Connect to the server and send data:
B$ websocat ws://127.0.0.1:1234/
ABC
123

Install Chromium:
$ sudo snap install chromium

Start Chromium with remote debugging enabled:
$ chromium --remote-debugging-port=9222&

From another terminal, run this command to open a new tab in Chrome:
$ curl -X PUT http://127.0.0.1:9222/json/new | grep webSocketDebuggerUrl | cut -d'"' -f4 | head -1
ws://127.0.0.1:9222/devtools/page/DC8E8EF5B872E141E8F60FDB4764F648

Take the WebSocket URL returned by the previous command, substitute it below, and run:
$ echo 'Page.navigate {"url":"https://example.com"}' | websocat -n1 --jsonrpc --jsonrpc-omit-jsonrpc ws://127.0.0.1:9222/devtools/page/DC8E8EF5B872E141E8F60FDB4764F648

The Chrome tab navigates to https://example.com/

Bridge a WebSocket connection to TCP:
$ websocat --oneshot -b tcp-l:127.0.0.1:1234 ws://ws.vi-server.org/mirror&
$ nc 127.0.0.1 1234

Chain TCP and WebSocket conversions to test ssh:
$ websocat --oneshot -b ws-l:127.0.0.1:1234 tcp:127.0.0.1:22&
$ websocat --oneshot -b tcp-l:127.0.0.1:1236 ws://127.0.0.1:1234/&
$ nc 127.0.0.1 1236
SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.9
aaaaasdf
Invalid SSH identification string.