
Thursday, February 20, 2025

Using web search in Open WebUI

https://developers.google.com/custom-search/v1/introduction?hl=zh-tw
Click "Get a Key" under "Programmable Search Engine (free edition) users"

https://programmablesearchengine.google.com/controlpanel/all
Click Add
Enter a search engine name
In the Paid Element API key field, enter the key obtained above
Get the search engine ID

In Open WebUI, click the user name at the bottom left and choose Settings
Open Admin Settings / Web Search
Turn on Enable Web Search
Web Search Engine: google_pse
Turn on Full Context Mode
Enter the Google PSE API key
Enter the Google PSE Engine ID (to verify the key and ID first, see the sketch below)
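
Before relying on the key inside Open WebUI, it may be worth confirming that the API key and engine ID actually work. Below is a minimal sketch (hypothetical file name check_pse.py) that queries the Custom Search JSON API directly; replace the two placeholder values with your own key and ID.

$ vi check_pse.py
import requests

API_KEY = "YOUR_GOOGLE_PSE_API_KEY"     # the key obtained above (placeholder)
ENGINE_ID = "YOUR_SEARCH_ENGINE_ID"     # the search engine ID obtained above (placeholder)

resp = requests.get(
    "https://www.googleapis.com/customsearch/v1",
    params={"key": API_KEY, "cx": ENGINE_ID, "q": "open webui"},
    timeout=10,
)
resp.raise_for_status()
for item in resp.json().get("items", []):
    print(item["title"], item["link"])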

Web Search Engine: duckduckgo
This one can be used directly, with no key required

Below the message input field in Open WebUI, you can now click Web Search.

Wednesday, February 19, 2025

Installing ollama, open-webui, and nginx

See https://github.com/ollama/ollama
See https://hub.docker.com/r/ollama/ollama
See https://www.53ai.com/news/OpenSourceLLM/2024072585037.html

$ docker run -d --gpus=all -p 11434:11434 --name ollama \
  -v /mnt/Data/ollama/ollama_volume:/root/.ollama \
  ollama/ollama
$ docker exec -it ollama ollama run deepseek-r1
$ git clone https://github.com/ggerganov/llama.cpp.git
$ cd llama.cpp
$ cmake -B build
$ cmake --build build --config Release
$ pip install huggingface_hub

Convert a model from Hugging Face to GGUF format
$ vi download.py
from huggingface_hub import snapshot_download, login

login("hf_BqLATKBqbVzOWNBJcFMwHKzCJfu")

# Download the model
snapshot_download(
    "taide/Llama-3.1-TAIDE-LX-8B-Chat",
    local_dir="taide_Llama-3.1-TAIDE-LX-8B-Chat",
    local_dir_use_symlinks=False,
    ignore_patterns=["*.gguf"]
)

$ vi convert_hf_to_gguf_update.py
In the models list, add the lines below; note the choice of TOKENIZER_TYPE
    {"name": "taide_Llama-3.1-TAIDE-LX-8B-Chat", "tokt": TOKENIZER_TYPE.BPE, "repo": "https://huggingface.co/taide/Llama-3.1-TAIDE-LX-8B-Chat"},
    {"name": "yentinglin_Llama-3-Taiwan-8B-Instruct", "tokt": TOKENIZER_TYPE.BPE, "repo": "https://huggingface.co/yentinglin/Llama-3-Taiwan-8B-Instruct"},
$ python convert_hf_to_gguf_update.py hf_BqLATKBqbVzOWNBJcFMwHKzCJfu
$ python convert_hf_to_gguf.py taide_Llama-3.1-TAIDE-LX-8B-Chat --outtype f16 --outfile taide_Llama-3.1-TAIDE-LX-8B-Chat.fp16.gguf
$ llama.cpp/build/bin/llama-quantize taide_Llama-3.1-TAIDE-LX-8B-Chat.fp16.gguf Q4_K_M
$ mv ggml-model-Q4_K_M.gguf taide_Llama-3.1-TAIDE-LX-8B-Chat-Q4_K_M.gguf
$ vi Modelfile.taide-8b
FROM ./taide_Llama-3.1-TAIDE-LX-8B-Chat-Q4_K_M.gguf
# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1
# set the system message
SYSTEM """
我是一個萬事通
"""

$ docker exec -it ollama /bin/bash
# cd /root/.ollama
# ollama create taide-8b -f ./Modelfile.taide-8b
# ollama list
# ollama show taide-8b
# ollama rm taide-8b
# ollama ps
# ollama run taide-8b
>>> /bye
# OLLAMA_HOST=127.0.0.1:11434 ollama serve
$ curl http://localhost:11434/api/generate -d '{
  "model": "yentinglin-8b", 
  "prompt": "建議適合ai的程式語言"
}'
$ curl http://localhost:11434/api/generate -d '{
  "model": "yentinglin-8b",
  "prompt": "建議適合ai的程式語言",
  "stream": false
}'
$ curl http://localhost:11434/api/chat -d '{
  "model": "yentinglin-8b", 
  "messages": [
    {"role": "user", "content": "建議適合ai的程式語言"}
  ]
}'
$ curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "yentinglin-8b",
        "messages": [
            {
                "role": "system",
                "content": "你是一個萬事通"
            },
            {
                "role": "user",
                "content": "眼睛酸痛,怎麼辦?"
            }
        ]
    }'
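
Since Ollama exposes the OpenAI-compatible /v1/chat/completions endpoint used above, the same request can be made from Python. A minimal sketch (hypothetical file name ollama_chat.py), assuming the openai package is installed and the yentinglin-8b model has been created as above; the api_key just needs to be non-empty.

$ vi ollama_chat.py
from openai import OpenAI

# Point the client at Ollama's OpenAI-compatible endpoint instead of api.openai.com
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="yentinglin-8b",
    messages=[
        {"role": "system", "content": "你是一個萬事通"},
        {"role": "user", "content": "眼睛酸痛,怎麼辦?"},
    ],
)
print(response.choices[0].message.content)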

$ docker logs ollama

$ python ../llama.cpp/convert_hf_to_gguf.py yentinglin_Llama-3-Taiwan-8B-Instruct --outtype f16 --outfile yentinglin_Llama-3-Taiwan-8B-Instruct.fp16.gguf
$ llama.cpp/build/bin/llama-quantize yentinglin_Llama-3-Taiwan-8B-Instruct.fp16.gguf Q4_K_M



$ docker run -d -p 3000:8080 --gpus all \
  --add-host=host.docker.internal:host-gateway \
  -v /mnt/Data/ollama/open-webui_volume:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:cuda
  
In Firefox, open http://localhost:3000
The error "This address is restricted" appears
To fix it, change a Firefox setting:
Enter about:config in the address bar and click "Accept the Risk and Continue"
Search for network.security.ports.banned.override, select "String", and click +
Enter port 3000 and click the check mark
Reload http://localhost:3000

Chrome settings:
Open chrome://flags/#unsafely-treat-insecure-origin-as-secure
Add http://localhost:3000

Install nginx
See https://docs.openwebui.com/tutorials/https-nginx/
See https://yingrenn.blogspot.com/2020/07/ssl-nginx.html
$ vi nginx.conf
server {
    listen 443 ssl;
    server_name  www.domain.com.tw;
    ssl_certificate /etc/nginx/conf/Certs/server.pem;
    ssl_certificate_key /etc/nginx/conf/Certs/server.key;
    ssl_trusted_certificate /etc/nginx/conf/Certs/caChain.crt;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
    ssl_prefer_server_ciphers on;
    
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://host.docker.internal:3000;

        # Add WebSocket support (necessary for version 0.5.0 and up)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # (Optional) Disable proxy buffering for better streaming response from models
        proxy_buffering off;
    }
}
server {
     listen 80;
     server_name www.domain.com.tw;
     return 301 https://$host$request_uri; 
}

docker run -itd --name nginx \
  -p 80:80 -p 443:443 \
  --add-host=host.docker.internal:host-gateway \
  -v /mnt/Data/ollama/nginx/conf.d/nginx.conf:/etc/nginx/conf.d/nginx.conf \
  -v /mnt/Data/ollama/nginx/conf:/etc/nginx/conf \
  -m 100m library/nginx:latest

https://www.domain.com.tw

Tuesday, February 4, 2025

ASR speech recognition

See https://speaches-ai.github.io/speaches/
See https://github.com/speaches-ai/speaches/tree/master

curl --silent --remote-name https://raw.githubusercontent.com/speaches-ai/speaches/master/compose.yaml
curl --silent --remote-name https://raw.githubusercontent.com/speaches-ai/speaches/master/compose.cuda.yaml
curl --silent --remote-name https://raw.githubusercontent.com/speaches-ai/speaches/master/compose.cuda-cdi.yaml
export COMPOSE_FILE=compose.cuda-cdi.yaml

Install and use CUDA with the CDI (Container Device Interface) feature enabled
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

Edit compose.cuda-cdi.yaml: add a command entry and adjust devices
services:
  speaches:
    command: ["uvicorn", "--factory", "speaches.main:create_app", "--ws-ping-interval", "1000", "--ws-ping-timeout", "1200"]
    deploy:
      resources:
        reservations:
          # WARN: requires Docker Compose 2.24.2
          # https://docs.docker.com/reference/compose-file/merge/#replace-value
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities:
                - gpu

The following error appears in the server log:
websockets.exceptions.ConnectionClosedError: sent 1011 (internal error) keepalive ping timeout; no close frame received
Fix: add --ws-ping-interval and --ws-ping-timeout to the command in compose.cuda-cdi.yaml (as shown above)

$ docker compose up --detach
$ docker compose stop
$ docker compose rm
$ docker compose logs
$ docker inspect speaches
$ docker cp speaches:/home/ubuntu/speaches/speaches/config.py .

$ docker compose exec speaches sh
$ docker compose run -d speaches uvicorn --factory speaches.main:create_app --ws-ping-interval=10 --ws-ping-timeout=12

The following error appears in the server log:
INFO:speaches.routers.stt:audio_receiver:262:Not enough speech in the last 30.0 seconds.
$ vi speaches/src/speaches/config.py
inactivity_window_seconds: float = 1000.0

Documentation:
http://localhost:8000/docs
http://localhost:8000/redoc

$ curl -X POST -F "file=@/mnt/Data/Whisper/examples/《大隋说书人 》 01.mp3" -F "prompt=歡迎收聽第一集處女觀大隨雍洲且墨城深秋夜" -F "language=zh" http://localhost:8000/v1/audio/transcriptions

In Ubuntu Settings / Sound: Input, select the correct input source; the on-screen meter shows the input volume.
List the available capture devices:
$ arecord -l
**** List of CAPTURE Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALCS1200A Analog [ALCS1200A Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 2: ALCS1200A Alt Analog [ALCS1200A Alt Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

Record using card 0:
$ ffmpeg -f alsa -i hw:0 -acodec libmp3lame -b:a 128k -abr 1 aaa.mp3
Press q to stop.

Because -i hw:0 captured no sound, switch to -i default.
$ arecord -L
default
    Playback/recording through the PulseAudio sound server

Record from the default input source:
$ ffmpeg -f alsa -i default -acodec libmp3lame -b:a 128k -abr 1 aaa.mp3
Press q to stop.
$ ffmpeg -loglevel quiet -f alsa -i default -ac 1 -ar 16000 -f s16le aaa.wav
Press q to stop.
Strip the metadata from the mp3:
$ ffmpeg -hide_banner -i '/mnt/Data/Whisper/examples/《大隋说书人 》 01.mp3' -c:v copy -c:a copy -map_metadata -1 test.mp3
Convert the mp3 to PCM:
$ ffmpeg -i test.mp3 -f s16le -ar 16000 -ac 1 test.pcm

$ cat test.pcm | pv -qL 32000 | websocat --no-close --binary 'ws://localhost:8000/v1/audio/transcriptions?language=zh'

The client shows the following error:
Closing WebSocket connection due to ping timeout
Fix: add --ping-timeout and --ping-interval to the command:

$ cat test.pcm | pv -qL 32000 | websocat --no-close --binary --ping-timeout 12000 --ping-interval 10000 'ws://localhost:8000/v1/audio/transcriptions?language=zh'

Record from the microphone to produce a PCM file:
$ ffmpeg -f alsa -ar 16000 -i default -ac 1 -f s16le aaa.pcm
Convert it to mp3:
$ ffmpeg -f s16le -ar 16000 -ac 1 -i aaa.pcm -codec:a libmp3lame aaa.mp3
$ cat aaa.pcm | pv -aL 32000 | websocat --no-close --binary --ping-timeout 12000 --ping-interval 10000 'ws://localhost:8000/v1/audio/transcriptions?language=zh'
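
The websocat pipeline can also be reproduced in Python. A rough sketch (hypothetical file name ws_transcribe.py) using the websockets package: it paces the 16 kHz, 16-bit mono PCM at roughly real time (32000 bytes/s, like pv -L 32000), keeps the socket open like --no-close, and prints whatever the server sends back, assuming the server returns transcription results on the same socket.

$ vi ws_transcribe.py
import asyncio
import websockets

URL = "ws://localhost:8000/v1/audio/transcriptions?language=zh"
PCM_PATH = "aaa.pcm"   # 16 kHz, 16-bit mono PCM, as recorded above
CHUNK = 3200           # 0.1 s of audio at 32000 bytes/s

async def main():
    # ping_interval/ping_timeout mirror websocat's --ping-interval/--ping-timeout
    async with websockets.connect(URL, ping_interval=10, ping_timeout=12) as ws:
        async def sender():
            with open(PCM_PATH, "rb") as f:
                while chunk := f.read(CHUNK):
                    await ws.send(chunk)        # binary frames, like --binary
                    await asyncio.sleep(0.1)    # pace the stream at real time
        async def receiver():
            async for message in ws:            # print results until the server closes
                print(message)
        await asyncio.gather(sender(), receiver())

asyncio.run(main())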

Test via the CLI
export OPENAI_BASE_URL=http://localhost:8000/v1/
export OPENAI_API_KEY="cant-be-empty"
openai api audio.transcriptions.create -m Systran/faster-whisper-large-v3 -f '/mnt/Data/Whisper/examples/《大隋说书人 》 01.mp3' --response-format text
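
The same transcription call from Python, as a small sketch (hypothetical file name transcribe.py) using the openai client pointed at the local server, with the same base URL and placeholder key as the CLI test above.

$ vi transcribe.py
from openai import OpenAI

# Same local speaches server as above; the key only needs to be non-empty
client = OpenAI(base_url="http://localhost:8000/v1/", api_key="cant-be-empty")

with open("/mnt/Data/Whisper/examples/《大隋说书人 》 01.mp3", "rb") as audio:
    text = client.audio.transcriptions.create(
        model="Systran/faster-whisper-large-v3",
        file=audio,
        response_format="text",
    )
print(text)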


Apply for an OPENAI_API_KEY and test it
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

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer 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" \
  -d '{
    "model": "gpt-4o-mini",
    "store": true,
    "messages": [
      {"role": "user", "content": "write a haiku about ai"}
    ]
  }'