Installation with ROCm#

vLLM supports AMD GPUs with ROCm 6.2.

Requirements#

  • OS: Linux

  • Python: 3.9 – 3.12

  • GPU: MI200s (gfx90a), MI300 (gfx942), Radeon RX 7900 series (gfx1100)

  • ROCm 6.2

Note: PyTorch 2.5+ with ROCm 6.2 has dropped support for Python 3.8.
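
A quick way to sanity-check these requirements before starting (a sketch; the ROCm version file path may vary by installation):

$ # Check the Python version (3.9 - 3.12 required)
$ python3 --version
$ # Check the installed ROCm version (path may differ on your system)
$ cat /opt/rocm/.info/version
$ # Confirm the GPU and its gfx architecture are visible to ROCm
$ rocminfo | grep gfx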

Installation options:

  1. Build from source with docker

  2. Build from source

Option 2: Build from source#

  1. Install prerequisites (skip if you are already in an environment/docker with the following installed):

To install PyTorch, you can start from a fresh Docker image, e.g. rocm/pytorch:rocm6.2_ubuntu20.04_py3.9_pytorch_release_2.3.0 or rocm/pytorch-nightly.

Alternatively, you can install PyTorch using PyTorch wheels; see the installation guide in PyTorch Getting Started.
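
For instance, a ROCm build of PyTorch can be installed from the official wheel index roughly as follows (a sketch; the index URL and wheel availability depend on the ROCm version you target, and the build step later in this guide pins a specific nightly instead):

$ pip3 install torch --index-url https://download.pytorch.org/whl/rocm6.2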

  2. Install Triton flash attention for ROCm

Install ROCm’s Triton flash attention (the default triton-mlir branch) following the instructions from ROCm/triton.

$ python3 -m pip install ninja cmake wheel pybind11
$ pip uninstall -y triton
$ git clone https://github.com/OpenAI/triton.git
$ cd triton
$ git checkout e192dba
$ cd python
$ pip3 install .
$ cd ../..

Note

  • If you see an HTTP error while downloading packages during the Triton build, try again; the error is intermittent.
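
After the build, a quick import check can confirm that the ROCm Triton package is visible to Python (a minimal sketch; the printed version depends on the commit checked out above):

$ python3 -c "import triton; print(triton.__version__)"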

  3. Optionally, if you want to use CK flash attention, install flash attention for ROCm

Install ROCm’s flash attention (v2.5.9.post1) following the instructions from ROCm/flash-attention. Alternatively, wheels intended for vLLM use can be found under the releases.

For example, for ROCm 6.2, suppose your gfx arch is gfx90a. To find your gfx architecture, run rocminfo | grep gfx.

$ git clone https://github.com/ROCm/flash-attention.git
$ cd flash-attention
$ git checkout 3cea2fb
$ git submodule update --init
$ GPU_ARCHS="gfx90a" python3 setup.py install
$ cd ..

Note

  • You might need to downgrade the “ninja” version to 1.10 as it is not used when compiling flash-attention-2 (e.g. pip install ninja==1.10.2.4).
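
If you built CK flash attention, a simple import check verifies the install (a sketch; the version should correspond to the 2.5.x release noted above):

$ python3 -c "import flash_attn; print(flash_attn.__version__)"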

  4. Build vLLM.

    For example, vLLM on ROCm 6.2 can be built with the following steps:

    $ pip install --upgrade pip
    
    $ # Install PyTorch
    $ pip uninstall torch -y
    $ pip install --no-cache-dir --pre torch==2.6.0.dev20240918 --index-url https://download.pytorch.org/whl/nightly/rocm6.2
    
    $ # Build & install AMD SMI
    $ pip install /opt/rocm/share/amd_smi
    
    $ # Install dependencies
    $ pip install --upgrade numba scipy huggingface-hub[cli]
    $ pip install "numpy<2"
    $ pip install -r requirements-rocm.txt
    
    $ # Build vLLM for MI210/MI250/MI300.
    $ export PYTORCH_ROCM_ARCH="gfx90a;gfx942"
    $ python3 setup.py develop
    

    This may take 5-10 minutes. Currently, pip install . does not work for ROCm installation.
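
    Once the build completes, a short smoke test can confirm that vLLM imports and can run a generation (a sketch; facebook/opt-125m is only an example model and any small model you can download will do):

    $ # Check that the editable install is importable
    $ python3 -c "import vllm; print(vllm.__version__)"
    $ # Optional: tiny offline generation as a smoke test
    $ python3 -c "from vllm import LLM; print(LLM(model='facebook/opt-125m').generate('Hello, my name is'))"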

Tip

  • Triton flash attention is used by default. For benchmarking purposes, it is recommended to run a warm-up step before collecting performance numbers.

  • Triton flash attention does not currently support sliding window attention. If using half precision, please use CK flash-attention for sliding window support.

  • To use CK flash-attention or PyTorch naive attention, set VLLM_USE_TRITON_FLASH_ATTN=0 to turn off Triton flash attention (see the example after this list).

  • Ideally, the ROCm version of PyTorch should match the ROCm driver version.
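
For example, to serve a model with CK flash-attention (or naive attention) instead of Triton (a sketch; the entry point is the standard OpenAI-compatible server and the model name is just a placeholder):

$ export VLLM_USE_TRITON_FLASH_ATTN=0
$ python3 -m vllm.entrypoints.openai.api_server --model facebook/opt-125m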

Tip