Supported Models

vLLM supports a variety of generative Transformer models in HuggingFace Transformers. The following is the list of model architectures that are currently supported by vLLM. Alongside each architecture, we include some popular models that use it.

| Architecture | Models | Example HuggingFace Models | LoRA |
|---|---|---|---|
| AquilaForCausalLM | Aquila | BAAI/Aquila-7B, BAAI/AquilaChat-7B, etc. | ✅︎ |
| BaiChuanForCausalLM | Baichuan | baichuan-inc/Baichuan2-13B-Chat, baichuan-inc/Baichuan-7B, etc. | ✅︎ |
| BloomForCausalLM | BLOOM, BLOOMZ, BLOOMChat | bigscience/bloom, bigscience/bloomz, etc. | |
| ChatGLMModel | ChatGLM | THUDM/chatglm2-6b, THUDM/chatglm3-6b, etc. | ✅︎ |
| CohereForCausalLM | Command-R | CohereForAI/c4ai-command-r-v01, etc. | |
| DbrxForCausalLM | DBRX | databricks/dbrx-base, databricks/dbrx-instruct, etc. | |
| DeciLMForCausalLM | DeciLM | Deci/DeciLM-7B, Deci/DeciLM-7B-instruct, etc. | |
| FalconForCausalLM | Falcon | tiiuae/falcon-7b, tiiuae/falcon-40b, tiiuae/falcon-rw-7b, etc. | |
| GemmaForCausalLM | Gemma | google/gemma-2b, google/gemma-7b, etc. | ✅︎ |
| GPT2LMHeadModel | GPT-2 | gpt2, gpt2-xl, etc. | |
| GPTBigCodeForCausalLM | StarCoder, SantaCoder, WizardCoder | bigcode/starcoder, bigcode/gpt_bigcode-santacoder, WizardLM/WizardCoder-15B-V1.0, etc. | |
| GPTJForCausalLM | GPT-J | EleutherAI/gpt-j-6b, nomic-ai/gpt4all-j, etc. | |
| GPTNeoXForCausalLM | GPT-NeoX, Pythia, OpenAssistant, Dolly V2, StableLM | EleutherAI/gpt-neox-20b, EleutherAI/pythia-12b, OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5, databricks/dolly-v2-12b, stabilityai/stablelm-tuned-alpha-7b, etc. | |
| InternLMForCausalLM | InternLM | internlm/internlm-7b, internlm/internlm-chat-7b, etc. | ✅︎ |
| InternLM2ForCausalLM | InternLM2 | internlm/internlm2-7b, internlm/internlm2-chat-7b, etc. | |
| JAISLMHeadModel | Jais | core42/jais-13b, core42/jais-13b-chat, core42/jais-30b-v3, core42/jais-30b-chat-v3, etc. | |
| LlamaForCausalLM | LLaMA, Llama 2, Meta Llama 3, Vicuna, Alpaca, Yi | meta-llama/Meta-Llama-3-8B-Instruct, meta-llama/Meta-Llama-3-70B-Instruct, meta-llama/Llama-2-13b-hf, meta-llama/Llama-2-70b-hf, openlm-research/open_llama_13b, lmsys/vicuna-13b-v1.3, 01-ai/Yi-6B, 01-ai/Yi-34B, etc. | ✅︎ |
| MiniCPMForCausalLM | MiniCPM | openbmb/MiniCPM-2B-sft-bf16, openbmb/MiniCPM-2B-dpo-bf16, etc. | |
| MistralForCausalLM | Mistral, Mistral-Instruct | mistralai/Mistral-7B-v0.1, mistralai/Mistral-7B-Instruct-v0.1, etc. | ✅︎ |
| MixtralForCausalLM | Mixtral-8x7B, Mixtral-8x7B-Instruct | mistralai/Mixtral-8x7B-v0.1, mistralai/Mixtral-8x7B-Instruct-v0.1, mistral-community/Mixtral-8x22B-v0.1, etc. | ✅︎ |
| MPTForCausalLM | MPT, MPT-Instruct, MPT-Chat, MPT-StoryWriter | mosaicml/mpt-7b, mosaicml/mpt-7b-storywriter, mosaicml/mpt-30b, etc. | |
| OLMoForCausalLM | OLMo | allenai/OLMo-1B-hf, allenai/OLMo-7B-hf, etc. | |
| OPTForCausalLM | OPT, OPT-IML | facebook/opt-66b, facebook/opt-iml-max-30b, etc. | |
| OrionForCausalLM | Orion | OrionStarAI/Orion-14B-Base, OrionStarAI/Orion-14B-Chat, etc. | |
| PhiForCausalLM | Phi | microsoft/phi-1_5, microsoft/phi-2, etc. | |
| Phi3ForCausalLM | Phi-3 | microsoft/Phi-3-mini-4k-instruct, microsoft/Phi-3-mini-128k-instruct, etc. | |
| QWenLMHeadModel | Qwen | Qwen/Qwen-7B, Qwen/Qwen-7B-Chat, etc. | |
| Qwen2ForCausalLM | Qwen2 | Qwen/Qwen2-beta-7B, Qwen/Qwen2-beta-7B-Chat, etc. | ✅︎ |
| Qwen2MoeForCausalLM | Qwen2MoE | Qwen/Qwen1.5-MoE-A2.7B, Qwen/Qwen1.5-MoE-A2.7B-Chat, etc. | |
| StableLmForCausalLM | StableLM | stabilityai/stablelm-3b-4e1t, stabilityai/stablelm-base-alpha-7b-v2, etc. | |

If your model uses one of the above model architectures, you can seamlessly run your model with vLLM. Otherwise, please refer to Adding a New Model for instructions on how to implement support for your model. Alternatively, you can raise an issue on our GitHub project.

Note

Currently, the ROCm version of vLLM supports Mistral and Mixtral only for context lengths up to 4096 tokens.

Tip

The easiest way to check if your model is supported is to run the program below:

from vllm import LLM

llm = LLM(model=...)  # Name or path of your model
output = llm.generate("Hello, my name is")
print(output)

If vLLM successfully generates text, it indicates that your model is supported.
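
Note that generate returns a list of RequestOutput objects rather than a plain string, so printing the list shows object representations. A minimal sketch for printing only the generated text (facebook/opt-125m is used here purely as a small example model):

from vllm import LLM

llm = LLM(model="facebook/opt-125m")  # substitute the name or path of your model
outputs = llm.generate("Hello, my name is")
# each RequestOutput carries the prompt and its generated completions
for request_output in outputs:
    print(request_output.outputs[0].text)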

Tip

To use models from ModelScope instead of HuggingFace Hub, set an environment variable:

$ export VLLM_USE_MODELSCOPE=True

Then load the model with trust_remote_code=True:

from vllm import LLM

llm = LLM(model=..., revision=..., trust_remote_code=True)  # Name or path of your model
output = llm.generate("Hello, my name is")
print(output)

Model Support Policy

At vLLM, we are committed to facilitating the integration and support of third-party models within our ecosystem. Our approach is designed to balance the need for robustness and the practical limitations of supporting a wide range of models. Here’s how we manage third-party model support:

  1. Community-Driven Support: We encourage community contributions for adding new models. When a user requests support for a new model, we welcome pull requests (PRs) from the community. These contributions are evaluated primarily on the sensibility of the output they generate, rather than strict consistency with existing implementations such as those in transformers. Call for contribution: PRs coming directly from model vendors are greatly appreciated!

  2. Best-Effort Consistency: While we aim to maintain a level of consistency between the models implemented in vLLM and other frameworks like transformers, complete alignment is not always feasible. Factors like acceleration techniques and the use of low-precision computations can introduce discrepancies. Our commitment is to ensure that the implemented models are functional and produce sensible results.

  3. Issue Resolution and Model Updates: Users are encouraged to report any bugs or issues they encounter with third-party models. Proposed fixes should be submitted via PRs, with a clear explanation of the problem and the rationale behind the proposed solution. If a fix for one model impacts another, we rely on the community to highlight and address these cross-model dependencies. Note: for bugfix PRs, it is good etiquette to inform the original author to seek their feedback.

  4. Monitoring and Updates: Users interested in specific models should monitor the commit history for those models (e.g., by tracking changes in the main/vllm/model_executor/models directory; an example command follows this list). This proactive approach helps users stay informed about updates and changes that may affect the models they use.

  5. Selective Focus: Our resources are primarily directed towards models with significant user interest and impact. Models that are less frequently used may receive less attention, and we rely on the community to play a more active role in their upkeep and improvement.
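
For example, one way to follow changes to the model implementations in a local checkout of the vLLM repository (the directory path reflects the current layout and may change between releases):

$ git log --oneline -- vllm/model_executor/models/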

Through this approach, vLLM fosters a collaborative environment where both the core development team and the broader community contribute to the robustness and diversity of the third-party models supported in our ecosystem.

Note that, as an inference engine, vLLM does not introduce new models. Therefore, all models supported by vLLM are third-party models in this regard.

We have the following levels of testing for models:

  1. Strict Consistency: We compare the model's output with the output of the same model in the HuggingFace Transformers library under greedy decoding. This is the most stringent test. Please refer to test_models.py and test_big_models.py for the models that have passed this test (a rough sketch of such a comparison appears after this list).

  2. Output Sensibility: We check if the output of the model is sensible and coherent, by measuring the perplexity of the output and checking for any obvious errors. This is a less stringent test.

  3. Runtime Functionality: We check if the model can be loaded and run without errors. This is the least stringent test. Please refer to functionality tests and examples for the models that have passed this test.

  4. Community Feedback: We rely on the community to provide feedback on the models. If a model is broken or not working as expected, we encourage users to raise issues to report it or open pull requests to fix it. The rest of the models fall under this category.
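
As a rough illustration of the strict-consistency comparison (item 1) and a crude sensibility score (item 2), the check could look something like the sketch below. This is an assumed outline, not the project's actual test harness, and facebook/opt-125m is used only as a small example model:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

from vllm import LLM, SamplingParams

model_id = "facebook/opt-125m"  # example model, not from the test suite
prompt = "Hello, my name is"

# greedy decoding in vLLM (temperature=0)
llm = LLM(model=model_id)
params = SamplingParams(temperature=0, max_tokens=32)
vllm_text = llm.generate(prompt, params)[0].outputs[0].text

# greedy decoding in HuggingFace Transformers
tokenizer = AutoTokenizer.from_pretrained(model_id)
hf_model = AutoModelForCausalLM.from_pretrained(model_id)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
hf_ids = hf_model.generate(input_ids, do_sample=False, max_new_tokens=32)
hf_text = tokenizer.decode(hf_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

print("outputs match:", vllm_text == hf_text)

# crude sensibility check: perplexity of the full sequence under the HF model
with torch.no_grad():
    loss = hf_model(hf_ids, labels=hf_ids).loss
print("perplexity:", torch.exp(loss).item())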