Supported Models
vLLM supports a variety of generative Transformer models in HuggingFace Transformers. The following is the list of model architectures that are currently supported by vLLM. Alongside each architecture, we include some popular models that use it.
Architecture | Models | Example HuggingFace Models
---|---|---
`AquilaForCausalLM` | Aquila | `BAAI/Aquila-7B`, `BAAI/AquilaChat-7B`, etc.
`BaiChuanForCausalLM` | Baichuan | `baichuan-inc/Baichuan-7B`, `baichuan-inc/Baichuan-13B-Chat`, etc.
`ChatGLMModel` | ChatGLM | `THUDM/chatglm2-6b`, `THUDM/chatglm3-6b`, etc.
`BloomForCausalLM` | BLOOM, BLOOMZ, BLOOMChat | `bigscience/bloom`, `bigscience/bloomz`, etc.
`FalconForCausalLM` | Falcon | `tiiuae/falcon-7b`, `tiiuae/falcon-40b`, `tiiuae/falcon-rw-7b`, etc.
`GPT2LMHeadModel` | GPT-2 | `gpt2`, `gpt2-xl`, etc.
`GPTBigCodeForCausalLM` | StarCoder, SantaCoder, WizardCoder | `bigcode/starcoder`, `bigcode/gpt_bigcode-santacoder`, etc.
`GPTJForCausalLM` | GPT-J | `EleutherAI/gpt-j-6b`, `nomic-ai/gpt4all-j`, etc.
`GPTNeoXForCausalLM` | GPT-NeoX, Pythia, OpenAssistant, Dolly V2, StableLM | `EleutherAI/gpt-neox-20b`, `EleutherAI/pythia-12b`, `databricks/dolly-v2-12b`, `stabilityai/stablelm-tuned-alpha-7b`, etc.
`InternLMForCausalLM` | InternLM | `internlm/internlm-7b`, `internlm/internlm-chat-7b`, etc.
`LlamaForCausalLM` | LLaMA, LLaMA-2, Vicuna, Alpaca, Koala, Guanaco | `meta-llama/Llama-2-13b-hf`, `meta-llama/Llama-2-70b-hf`, `openlm-research/open_llama_13b`, `lmsys/vicuna-13b-v1.3`, etc.
`MistralForCausalLM` | Mistral, Mistral-Instruct | `mistralai/Mistral-7B-v0.1`, `mistralai/Mistral-7B-Instruct-v0.1`, etc.
`MPTForCausalLM` | MPT, MPT-Instruct, MPT-Chat, MPT-StoryWriter | `mosaicml/mpt-7b`, `mosaicml/mpt-7b-storywriter`, `mosaicml/mpt-30b`, etc.
`OPTForCausalLM` | OPT, OPT-IML | `facebook/opt-66b`, `facebook/opt-iml-max-30b`, etc.
`PhiForCausalLM` | Phi-1.5 | `microsoft/phi-1_5`, etc.
`QWenLMHeadModel` | Qwen | `Qwen/Qwen-7B`, `Qwen/Qwen-7B-Chat`, etc.
`YiForCausalLM` | Yi | `01-ai/Yi-6B`, `01-ai/Yi-34B`, etc.
If your model uses one of the above model architectures, you can run it seamlessly with vLLM. Otherwise, please refer to Adding a New Model for instructions on how to implement support for your model. Alternatively, you can raise an issue on our GitHub project.
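If you are unsure which architecture your model uses, you can read it from the model's HuggingFace config without downloading any weights. A minimal sketch, assuming the `transformers` package is installed (the model name here is only an example):

```python
from transformers import AutoConfig

# Load only the model config (no weights) and inspect the declared architecture.
config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
print(config.architectures)  # e.g. ['MistralForCausalLM']
```

If the printed class name appears in the Architecture column above, vLLM can serve the model.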
Tip
The easiest way to check if your model is supported is to run the program below:

```python
from vllm import LLM

llm = LLM(model=...)  # Name or path of your model
output = llm.generate("Hello, my name is")
print(output)
```
To use a model from ModelScope (www.modelscope.cn) instead of the HuggingFace Hub, set the following environment variable:

```shell
$ export VLLM_USE_MODELSCOPE=True
```

And use it with `trust_remote_code=True`:

```python
from vllm import LLM

llm = LLM(model=..., revision=..., trust_remote_code=True)  # Name or path of your model
output = llm.generate("Hello, my name is")
print(output)
```
If vLLM successfully generates text, it indicates that your model is supported.
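Note that `generate` returns a list of `RequestOutput` objects rather than plain strings. A minimal sketch of pulling out the generated text (the model, prompt, and sampling parameters here are illustrative choices, not requirements):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # a small model, chosen only for illustration
sampling_params = SamplingParams(temperature=0.8, max_tokens=32)

outputs = llm.generate(["Hello, my name is"], sampling_params)
for request_output in outputs:
    # Each RequestOutput holds the prompt and one or more completions.
    print(request_output.prompt, request_output.outputs[0].text)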