Supported Models

vLLM supports a variety of generative Transformer models in HuggingFace Transformers. The following is the list of model architectures that are currently supported by vLLM. Alongside each architecture, we include some popular models that use it.

| Architecture | Models | Example HuggingFace Models | LoRA |
|---|---|---|---|
| AquilaForCausalLM | Aquila | BAAI/Aquila-7B, BAAI/AquilaChat-7B, etc. | ✅︎ |
| BaiChuanForCausalLM | Baichuan | baichuan-inc/Baichuan2-13B-Chat, baichuan-inc/Baichuan-7B, etc. | ✅︎ |
| ChatGLMModel | ChatGLM | THUDM/chatglm2-6b, THUDM/chatglm3-6b, etc. | ✅︎ |
| CohereForCausalLM | Command-R | CohereForAI/c4ai-command-r-v01, etc. | |
| DbrxForCausalLM | DBRX | databricks/dbrx-base, databricks/dbrx-instruct, etc. | |
| DeciLMForCausalLM | DeciLM | Deci/DeciLM-7B, Deci/DeciLM-7B-instruct, etc. | |
| BloomForCausalLM | BLOOM, BLOOMZ, BLOOMChat | bigscience/bloom, bigscience/bloomz, etc. | |
| FalconForCausalLM | Falcon | tiiuae/falcon-7b, tiiuae/falcon-40b, tiiuae/falcon-rw-7b, etc. | |
| GemmaForCausalLM | Gemma | google/gemma-2b, google/gemma-7b, etc. | ✅︎ |
| GPT2LMHeadModel | GPT-2 | gpt2, gpt2-xl, etc. | |
| GPTBigCodeForCausalLM | StarCoder, SantaCoder, WizardCoder | bigcode/starcoder, bigcode/gpt_bigcode-santacoder, WizardLM/WizardCoder-15B-V1.0, etc. | |
| GPTJForCausalLM | GPT-J | EleutherAI/gpt-j-6b, nomic-ai/gpt4all-j, etc. | |
| GPTNeoXForCausalLM | GPT-NeoX, Pythia, OpenAssistant, Dolly V2, StableLM | EleutherAI/gpt-neox-20b, EleutherAI/pythia-12b, OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5, databricks/dolly-v2-12b, stabilityai/stablelm-tuned-alpha-7b, etc. | |
| InternLMForCausalLM | InternLM | internlm/internlm-7b, internlm/internlm-chat-7b, etc. | ✅︎ |
| InternLM2ForCausalLM | InternLM2 | internlm/internlm2-7b, internlm/internlm2-chat-7b, etc. | |
| JAISLMHeadModel | Jais | core42/jais-13b, core42/jais-13b-chat, core42/jais-30b-v3, core42/jais-30b-chat-v3, etc. | |
| LlamaForCausalLM | LLaMA, LLaMA-2, Vicuna, Alpaca, Yi | meta-llama/Llama-2-13b-hf, meta-llama/Llama-2-70b-hf, openlm-research/open_llama_13b, lmsys/vicuna-13b-v1.3, 01-ai/Yi-6B, 01-ai/Yi-34B, etc. | ✅︎ |
| MistralForCausalLM | Mistral, Mistral-Instruct | mistralai/Mistral-7B-v0.1, mistralai/Mistral-7B-Instruct-v0.1, etc. | ✅︎ |
| MixtralForCausalLM | Mixtral-8x7B, Mixtral-8x7B-Instruct | mistralai/Mixtral-8x7B-v0.1, mistralai/Mixtral-8x7B-Instruct-v0.1, etc. | ✅︎ |
| MPTForCausalLM | MPT, MPT-Instruct, MPT-Chat, MPT-StoryWriter | mosaicml/mpt-7b, mosaicml/mpt-7b-storywriter, mosaicml/mpt-30b, etc. | |
| OLMoForCausalLM | OLMo | allenai/OLMo-1B, allenai/OLMo-7B, etc. | |
| OPTForCausalLM | OPT, OPT-IML | facebook/opt-66b, facebook/opt-iml-max-30b, etc. | |
| OrionForCausalLM | Orion | OrionStarAI/Orion-14B-Base, OrionStarAI/Orion-14B-Chat, etc. | |
| PhiForCausalLM | Phi | microsoft/phi-1_5, microsoft/phi-2, etc. | |
| QWenLMHeadModel | Qwen | Qwen/Qwen-7B, Qwen/Qwen-7B-Chat, etc. | |
| Qwen2ForCausalLM | Qwen2 | Qwen/Qwen2-beta-7B, Qwen/Qwen2-beta-7B-Chat, etc. | ✅︎ |
| Qwen2MoeForCausalLM | Qwen2MoE | Qwen/Qwen1.5-MoE-A2.7B, Qwen/Qwen1.5-MoE-A2.7B-Chat, etc. | |
| StableLmForCausalLM | StableLM | stabilityai/stablelm-3b-4e1t, stabilityai/stablelm-base-alpha-7b-v2, etc. | |

If your model uses one of the above model architectures, you can seamlessly run your model with vLLM. Otherwise, please refer to Adding a New Model for instructions on how to implement support for your model. Alternatively, you can raise an issue on our GitHub project.

Note

Currently, the ROCm version of vLLM supports Mistral and Mixtral only for context lengths up to 4096.

Tip

The easiest way to check if your model is supported is to run the program below:

```python
from vllm import LLM

llm = LLM(model=...)  # Name or path of your model
output = llm.generate("Hello, my name is")
print(output)
```

If vLLM successfully generates text, it indicates that your model is supported.

Tip

To use models from ModelScope instead of HuggingFace Hub, set an environment variable:

```shell
$ export VLLM_USE_MODELSCOPE=True
```

Then pass trust_remote_code=True when creating the LLM:

```python
from vllm import LLM

llm = LLM(model=..., revision=..., trust_remote_code=True)  # Name or path of your model
output = llm.generate("Hello, my name is")
print(output)
```