vLLM is a Python library that also contains pre-compiled C++ and CUDA (12.1) binaries.

Requirements#


  • OS: Linux

  • Python: 3.8 – 3.11

  • GPU: compute capability 7.0 or higher (e.g., V100, T4, RTX20xx, A100, L4, H100, etc.)
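
If you are unsure of your GPU's compute capability, you can query it with PyTorch (assuming PyTorch is already installed); the printed (major, minor) pair should be (7, 0) or higher:

$ python -c "import torch; print(torch.cuda.get_device_capability())"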

Install with pip#

You can install vLLM using pip:

$ # (Optional) Create a new conda environment.
$ conda create -n myenv python=3.8 -y
$ conda activate myenv

$ # Install vLLM with CUDA 12.1.
$ pip install vllm


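To verify the installation, you can run a short generation end to end. The snippet below is a minimal sketch following vLLM's quickstart; `facebook/opt-125m` is only an example model and is downloaded from the Hugging Face Hub on first use:

from vllm import LLM, SamplingParams

# Load a small example model and generate one completion.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
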
As of now, vLLM’s binaries are compiled with CUDA 12.1 by default. However, you can install vLLM compiled with CUDA 11.8 by running:

$ # Install vLLM with CUDA 11.8.
$ # Replace `cp310` with your Python version (e.g., `cp38`, `cp39`, `cp311`).
$ pip install https://github.com/vllm-project/vllm/releases/download/v0.2.4/vllm-0.2.4+cu118-cp310-cp310-manylinux1_x86_64.whl

$ # Re-install PyTorch with CUDA 11.8.
$ pip uninstall torch -y
$ pip install torch --upgrade --index-url https://download.pytorch.org/whl/cu118
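
You can confirm that the re-installed PyTorch was built against CUDA 11.8 by printing the CUDA version it ships with:

$ python -c "import torch; print(torch.version.cuda)"  # should print 11.8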

Build from source#

You can also build and install vLLM from source:

$ git clone https://github.com/vllm-project/vllm.git
$ cd vllm
$ pip install -e .  # This may take 5-10 minutes.
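
If the build runs out of memory on machines with many cores, you can cap the number of parallel compile jobs. The MAX_JOBS variable is honored by the PyTorch C++ extension build system that vLLM's setup uses:

$ # (Optional) Limit parallel compilation to reduce peak memory usage.
$ MAX_JOBS=4 pip install -e .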


If you have trouble building vLLM, we recommend using the NVIDIA PyTorch Docker image.

$ # Use `--ipc=host` to make sure the shared memory is large enough.
$ docker run --gpus all -it --rm --ipc=host nvcr.io/nvidia/pytorch:23.10-py3
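
Since the pre-built wheels target a specific CUDA version, it can be useful to confirm which CUDA toolkit the container (or your host) provides before installing or building:

$ nvcc --version  # the release line shows the CUDA toolkit version, e.g. 12.1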