# Open WebUI
1. Install Docker.
2. Start the vLLM server with a supported chat completion model, e.g.:

   ```bash
   vllm serve qwen/Qwen1.5-0.5B-Chat
   ```
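   Optionally, you can confirm the server is up and the model is registered before wiring up the UI. This assumes vLLM's default port 8000 on the local machine; adjust the host and port if your setup differs:

   ```bash
   # List the models served by the OpenAI-compatible API
   curl http://localhost:8000/v1/models
   ```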
3. Start the Open WebUI Docker container (replace `<vllm serve host>` and `<vllm serve port>` with the host and port of your vLLM server):

   ```bash
   docker run -d -p 3000:8080 \
     --name open-webui \
     -v open-webui:/app/backend/data \
     -e OPENAI_API_BASE_URL=http://<vllm serve host>:<vllm serve port>/v1 \
     --restart always \
     ghcr.io/open-webui/open-webui:main
   ```
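   To verify the container started correctly, you can check its status and logs with the standard Docker CLI:

   ```bash
   # Confirm the container is running
   docker ps --filter "name=open-webui"

   # Follow the container logs to watch for startup errors
   docker logs -f open-webui
   ```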
4. Open it in the browser: http://open-webui-host:3000/. At the top of the web page, you should see the model qwen/Qwen1.5-0.5B-Chat.
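If the model does not appear, check that Open WebUI can reach the vLLM backend. A minimal sanity check is to send a chat completion request directly to vLLM (again assuming the default localhost:8000; substitute your own host and port):

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen/Qwen1.5-0.5B-Chat",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```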
