OpenAI-Compatible Server#

vLLM provides an HTTP server that implements OpenAI’s Completions API, Chat API, and more! This functionality lets you serve models and interact with them using an HTTP client.

In your terminal, you can install vLLM, then start the server with the vllm serve command. (You can also use our Docker image.)

vllm serve NousResearch/Meta-Llama-3-8B-Instruct --dtype auto --api-key token-abc123

To call the server, in your preferred text editor, create a script that uses an HTTP client. Include any messages that you want to send to the model. Then run that script. Below is an example script using the official OpenAI Python client.

from openai import OpenAI
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

completion = client.chat.completions.create(
  model="NousResearch/Meta-Llama-3-8B-Instruct",
  messages=[
    {"role": "user", "content": "Hello!"}
  ]
)

print(completion.choices[0].message)

Tip

vLLM supports some parameters that are not supported by OpenAI, such as top_k. You can pass these parameters to vLLM using the OpenAI client in the extra_body parameter of your requests, e.g. extra_body={"top_k": 50} for top_k, as shown below.
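
A minimal sketch of such a request, assuming the server started above with --api-key token-abc123 is still running:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

# top_k is vLLM-specific, so it is passed via extra_body rather than as a
# regular keyword argument of the OpenAI client.
completion = client.chat.completions.create(
    model="NousResearch/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
    extra_body={"top_k": 50},
)
print(completion.choices[0].message.content)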

Important

By default, the server applies generation_config.json from the Hugging Face model repository if it exists. This means the default values of certain sampling parameters can be overridden by those recommended by the model creator.

To disable this behavior, please pass --generation-config vllm when launching the server.

Supported APIs#

We currently support the following OpenAI APIs:

  • Completions API

  • Chat API

  • Embeddings API

  • Transcriptions API

In addition, we have the following custom APIs:

  • Tokenizer API

  • Pooling API

  • Score API

Each of these is documented in the API Reference section below.

Chat Template#

In order for the language model to support chat protocol, vLLM requires the model to include a chat template in its tokenizer configuration. The chat template is a Jinja2 template that specifies how roles, messages, and other chat-specific tokens are encoded in the input.

An example chat template for NousResearch/Meta-Llama-3-8B-Instruct can be found here

Some models do not provide a chat template even though they are instruction/chat fine-tuned. For those models, you can manually specify their chat template via the --chat-template parameter, using either the file path to the chat template or the template in string form. Without a chat template, the server cannot process chat messages and all chat requests will error.

vllm serve <model> --chat-template ./path-to-chat-template.jinja

The vLLM community provides a set of chat templates for popular models. You can find them under the examples directory.

With the inclusion of multi-modal chat APIs, the OpenAI spec now accepts chat messages in a new format which specifies both a type and a text field. An example is provided below:

completion = client.chat.completions.create(
  model="NousResearch/Meta-Llama-3-8B-Instruct",
  messages=[
    {"role": "user", "content": [{"type": "text", "text": "Classify this sentiment: vLLM is wonderful!"}]}
  ]
)

Most chat templates for LLMs expect the content field to be a string, but some newer models like meta-llama/Llama-Guard-3-1B expect the content to be formatted according to the OpenAI schema in the request. vLLM provides best-effort support to detect this automatically; the result is logged as a string like “Detected the chat template content format to be…”, and incoming requests are internally converted to match the detected format, which can be one of:

  • "string": A string.

    • Example: "Hello world"

  • "openai": A list of dictionaries, similar to OpenAI schema.

    • Example: [{"type": "text", "text": "Hello world!"}]

If the result is not what you expect, you can set the --chat-template-content-format CLI argument to override which format to use.

Extra Parameters#

vLLM supports a set of parameters that are not part of the OpenAI API. To use them, pass them as extra parameters in the OpenAI client, or merge them directly into the JSON payload if you are calling the HTTP API directly.

completion = client.chat.completions.create(
  model="NousResearch/Meta-Llama-3-8B-Instruct",
  messages=[
    {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
  ],
  extra_body={
    "guided_choice": ["positive", "negative"]
  }
)

Extra HTTP Headers#

Only the X-Request-Id HTTP request header is supported for now. It can be enabled with --enable-request-id-headers.

Note that enabling this header can impact performance significantly at high QPS. For this reason, we recommend implementing HTTP headers at the router level (e.g. via Istio) rather than within the vLLM layer. See this PR for more details.

completion = client.chat.completions.create(
  model="NousResearch/Meta-Llama-3-8B-Instruct",
  messages=[
    {"role": "user", "content": "Classify this sentiment: vLLM is wonderful!"}
  ],
  extra_headers={
    "x-request-id": "sentiment-classification-00001",
  }
)
print(completion._request_id)

completion = client.completions.create(
  model="NousResearch/Meta-Llama-3-8B-Instruct",
  prompt="A robot may not injure a human being",
  extra_headers={
    "x-request-id": "completion-test",
  }
)
print(completion._request_id)

CLI Reference#

vllm serve#

The vllm serve command is used to launch the OpenAI-compatible server.

Tip

The vast majority of command-line arguments are based on those for offline inference.

See here for some common options.

usage: vllm serve [-h] [--host HOST] [--port PORT]
                  [--uvicorn-log-level {debug,info,warning,error,critical,trace}]
                  [--disable-uvicorn-access-log] [--allow-credentials]
                  [--allowed-origins ALLOWED_ORIGINS]
                  [--allowed-methods ALLOWED_METHODS]
                  [--allowed-headers ALLOWED_HEADERS] [--api-key API_KEY]
                  [--lora-modules LORA_MODULES [LORA_MODULES ...]]
                  [--prompt-adapters PROMPT_ADAPTERS [PROMPT_ADAPTERS ...]]
                  [--chat-template CHAT_TEMPLATE]
                  [--chat-template-content-format {auto,string,openai}]
                  [--response-role RESPONSE_ROLE] [--ssl-keyfile SSL_KEYFILE]
                  [--ssl-certfile SSL_CERTFILE] [--ssl-ca-certs SSL_CA_CERTS]
                  [--enable-ssl-refresh] [--ssl-cert-reqs SSL_CERT_REQS]
                  [--root-path ROOT_PATH] [--middleware MIDDLEWARE]
                  [--return-tokens-as-token-ids]
                  [--disable-frontend-multiprocessing]
                  [--enable-request-id-headers] [--enable-auto-tool-choice]
                  [--tool-call-parser {granite-20b-fc,granite,hermes,internlm,jamba,llama4_json,llama3_json,mistral,phi4_mini_json,pythonic} or name registered in --tool-parser-plugin]
                  [--tool-parser-plugin TOOL_PARSER_PLUGIN] [--model MODEL]
                  [--task {auto,generate,embedding,embed,classify,score,reward,transcription}]
                  [--tokenizer TOKENIZER] [--hf-config-path HF_CONFIG_PATH]
                  [--skip-tokenizer-init] [--revision REVISION]
                  [--code-revision CODE_REVISION]
                  [--tokenizer-revision TOKENIZER_REVISION]
                  [--tokenizer-mode {auto,slow,mistral,custom}]
                  [--trust-remote-code]
                  [--allowed-local-media-path ALLOWED_LOCAL_MEDIA_PATH]
                  [--load-format {auto,pt,safetensors,npcache,dummy,tensorizer,sharded_state,gguf,bitsandbytes,mistral,runai_streamer,runai_streamer_sharded,fastsafetensors}]
                  [--download-dir DOWNLOAD_DIR]
                  [--model-loader-extra-config MODEL_LOADER_EXTRA_CONFIG]
                  [--use-tqdm-on-load | --no-use-tqdm-on-load]
                  [--config-format {auto,hf,mistral}]
                  [--dtype {auto,half,float16,bfloat16,float,float32}]
                  [--max-model-len MAX_MODEL_LEN]
                  [--guided-decoding-backend {auto,guidance,xgrammar}]
                  [--reasoning-parser {deepseek_r1,granite}]
                  [--logits-processor-pattern LOGITS_PROCESSOR_PATTERN]
                  [--model-impl {auto,vllm,transformers}]
                  [--distributed-executor-backend {external_launcher,mp,ray,uni,None}]
                  [--pipeline-parallel-size PIPELINE_PARALLEL_SIZE]
                  [--tensor-parallel-size TENSOR_PARALLEL_SIZE]
                  [--data-parallel-size DATA_PARALLEL_SIZE]
                  [--enable-expert-parallel | --no-enable-expert-parallel]
                  [--max-parallel-loading-workers MAX_PARALLEL_LOADING_WORKERS]
                  [--ray-workers-use-nsight | --no-ray-workers-use-nsight]
                  [--disable-custom-all-reduce | --no-disable-custom-all-reduce]
                  [--block-size {1,8,16,32,64,128}]
                  [--gpu-memory-utilization GPU_MEMORY_UTILIZATION]
                  [--swap-space SWAP_SPACE]
                  [--kv-cache-dtype {auto,fp8,fp8_e4m3,fp8_e5m2}]
                  [--num-gpu-blocks-override NUM_GPU_BLOCKS_OVERRIDE]
                  [--enable-prefix-caching | --no-enable-prefix-caching]
                  [--prefix-caching-hash-algo {builtin,sha256}]
                  [--cpu-offload-gb CPU_OFFLOAD_GB]
                  [--calculate-kv-scales | --no-calculate-kv-scales]
                  [--disable-sliding-window] [--use-v2-block-manager]
                  [--seed SEED] [--max-logprobs MAX_LOGPROBS]
                  [--disable-log-stats]
                  [--quantization {aqlm,awq,deepspeedfp,tpu_int8,fp8,ptpc_fp8,fbgemm_fp8,modelopt,nvfp4,marlin,bitblas,gguf,gptq_marlin_24,gptq_marlin,gptq_bitblas,awq_marlin,gptq,compressed-tensors,bitsandbytes,qqq,hqq,experts_int8,neuron_quant,ipex,quark,moe_wna16,torchao,None}]
                  [--rope-scaling ROPE_SCALING] [--rope-theta ROPE_THETA]
                  [--hf-token [HF_TOKEN]] [--hf-overrides HF_OVERRIDES]
                  [--enforce-eager]
                  [--max-seq-len-to-capture MAX_SEQ_LEN_TO_CAPTURE]
                  [--tokenizer-pool-size TOKENIZER_POOL_SIZE]
                  [--tokenizer-pool-type TOKENIZER_POOL_TYPE]
                  [--tokenizer-pool-extra-config TOKENIZER_POOL_EXTRA_CONFIG]
                  [--limit-mm-per-prompt LIMIT_MM_PER_PROMPT]
                  [--mm-processor-kwargs MM_PROCESSOR_KWARGS]
                  [--disable-mm-preprocessor-cache]
                  [--enable-lora | --no-enable-lora]
                  [--enable-lora-bias | --no-enable-lora-bias]
                  [--max-loras MAX_LORAS] [--max-lora-rank MAX_LORA_RANK]
                  [--lora-extra-vocab-size LORA_EXTRA_VOCAB_SIZE]
                  [--lora-dtype {auto,bfloat16,float16}]
                  [--long-lora-scaling-factors LONG_LORA_SCALING_FACTORS [LONG_LORA_SCALING_FACTORS ...]]
                  [--max-cpu-loras MAX_CPU_LORAS]
                  [--fully-sharded-loras | --no-fully-sharded-loras]
                  [--enable-prompt-adapter | --no-enable-prompt-adapter]
                  [--max-prompt-adapters MAX_PROMPT_ADAPTERS]
                  [--max-prompt-adapter-token MAX_PROMPT_ADAPTER_TOKEN]
                  [--device {auto,cpu,cuda,hpu,neuron,tpu,xpu}]
                  [--speculative-config SPECULATIVE_CONFIG]
                  [--ignore-patterns IGNORE_PATTERNS]
                  [--served-model-name SERVED_MODEL_NAME [SERVED_MODEL_NAME ...]]
                  [--qlora-adapter-name-or-path QLORA_ADAPTER_NAME_OR_PATH]
                  [--show-hidden-metrics-for-version SHOW_HIDDEN_METRICS_FOR_VERSION]
                  [--otlp-traces-endpoint OTLP_TRACES_ENDPOINT]
                  [--collect-detailed-traces COLLECT_DETAILED_TRACES]
                  [--disable-async-output-proc]
                  [--max-num-batched-tokens MAX_NUM_BATCHED_TOKENS]
                  [--max-num-seqs MAX_NUM_SEQS]
                  [--max-num-partial-prefills MAX_NUM_PARTIAL_PREFILLS]
                  [--max-long-partial-prefills MAX_LONG_PARTIAL_PREFILLS]
                  [--long-prefill-token-threshold LONG_PREFILL_TOKEN_THRESHOLD]
                  [--num-lookahead-slots NUM_LOOKAHEAD_SLOTS]
                  [--scheduler-delay-factor SCHEDULER_DELAY_FACTOR]
                  [--preemption-mode {recompute,swap,None}]
                  [--num-scheduler-steps NUM_SCHEDULER_STEPS]
                  [--multi-step-stream-outputs | --no-multi-step-stream-outputs]
                  [--scheduling-policy {fcfs,priority}]
                  [--enable-chunked-prefill | --no-enable-chunked-prefill]
                  [--disable-chunked-mm-input | --no-disable-chunked-mm-input]
                  [--scheduler-cls SCHEDULER_CLS]
                  [--override-neuron-config OVERRIDE_NEURON_CONFIG]
                  [--override-pooler-config OVERRIDE_POOLER_CONFIG]
                  [--compilation-config COMPILATION_CONFIG]
                  [--kv-transfer-config KV_TRANSFER_CONFIG]
                  [--worker-cls WORKER_CLS]
                  [--worker-extension-cls WORKER_EXTENSION_CLS]
                  [--generation-config GENERATION_CONFIG]
                  [--override-generation-config OVERRIDE_GENERATION_CONFIG]
                  [--enable-sleep-mode]
                  [--additional-config ADDITIONAL_CONFIG] [--enable-reasoning]
                  [--disable-cascade-attn] [--disable-log-requests]
                  [--max-log-len MAX_LOG_LEN] [--disable-fastapi-docs]
                  [--enable-prompt-tokens-details]
                  [--enable-server-load-tracking]

Named Arguments#

--host

Host name.

--port

Port number.

Default: 8000

--uvicorn-log-level

Possible choices: debug, info, warning, error, critical, trace

Log level for uvicorn.

Default: “info”

--disable-uvicorn-access-log

Disable uvicorn access log.

Default: False

--allow-credentials

Allow credentials.

Default: False

--allowed-origins

Allowed origins.

Default: [‘*’]

--allowed-methods

Allowed methods.

Default: [‘*’]

--allowed-headers

Allowed headers.

Default: [‘*’]

--api-key

If provided, the server will require this key to be presented in the header.

--lora-modules

LoRA module configurations in either 'name=path' format or JSON format. Example (old format): 'name=path'. Example (new format): {"name": "name", "path": "lora_path", "base_model_name": "id"}

--prompt-adapters

Prompt adapter configurations in the format name=path. Multiple adapters can be specified.

--chat-template

The file path to the chat template, or the template in single-line form for the specified model.

--chat-template-content-format

Possible choices: auto, string, openai

The format to render message content within a chat template.

  • “string” will render the content as a string. Example: "Hello World"

  • “openai” will render the content as a list of dictionaries, similar to OpenAI schema. Example: [{"type": "text", "text": "Hello world!"}]

Default: “auto”

--response-role

The role name to return if request.add_generation_prompt=true.

Default: assistant

--ssl-keyfile

The file path to the SSL key file.

--ssl-certfile

The file path to the SSL cert file.

--ssl-ca-certs

The CA certificates file.

--enable-ssl-refresh

Refresh SSL Context when SSL certificate files change

Default: False

--ssl-cert-reqs

Whether client certificates are required (see the stdlib ssl module's documentation).

Default: 0

--root-path

FastAPI root_path when app is behind a path based routing proxy.

--middleware

Additional ASGI middleware to apply to the app. We accept multiple --middleware arguments. The value should be an import path. If a function is provided, vLLM will add it to the server using @app.middleware('http'). If a class is provided, vLLM will add it to the server using app.add_middleware().

Default: []

--return-tokens-as-token-ids

When --max-logprobs is specified, represents single tokens as strings of the form ‘token_id:{token_id}’ so that tokens that are not JSON-encodable can be identified.

Default: False

--disable-frontend-multiprocessing

If specified, will run the OpenAI frontend server in the same process as the model serving engine.

Default: False

--enable-request-id-headers

If specified, API server will add X-Request-Id header to responses. Caution: this hurts performance at high QPS.

Default: False

--enable-auto-tool-choice

Enable auto tool choice for supported models. Use --tool-call-parser to specify which parser to use.

Default: False

--tool-call-parser

Select the tool call parser depending on the model that you’re using. This is used to parse the model-generated tool call into OpenAI API format. Required for --enable-auto-tool-choice.

--tool-parser-plugin

Specify the tool parser plugin used to parse model-generated tool calls into OpenAI API format. The parser names registered in this plugin can be used in --tool-call-parser.

Default: “”

--model

Name or path of the huggingface model to use.

Default: “facebook/opt-125m”

--task

Possible choices: auto, generate, embedding, embed, classify, score, reward, transcription

The task to use the model for. Each vLLM instance only supports one task, even if the same model can be used for multiple tasks. When the model only supports one task, "auto" can be used to select it; otherwise, you must specify explicitly which task to use.

Default: “auto”

--tokenizer

Name or path of the huggingface tokenizer to use. If unspecified, model name or path will be used.

--hf-config-path

Name or path of the huggingface config to use. If unspecified, model name or path will be used.

--skip-tokenizer-init

Skip initialization of tokenizer and detokenizer. Expects valid prompt_token_ids and None for prompt from the input. The generated output will contain token ids.

Default: False

--revision

The specific model version to use. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.

--code-revision

The specific revision to use for the model code on Hugging Face Hub. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.

--tokenizer-revision

Revision of the huggingface tokenizer to use. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.

--tokenizer-mode

Possible choices: auto, slow, mistral, custom

The tokenizer mode.

  • “auto” will use the fast tokenizer if available.

  • “slow” will always use the slow tokenizer.

  • “mistral” will always use the mistral_common tokenizer.

  • “custom” will use --tokenizer to select the preregistered tokenizer.

Default: “auto”

--trust-remote-code

Trust remote code from huggingface.

Default: False

--allowed-local-media-path

Allow API requests to read local images or videos from directories specified by the server file system. This is a security risk and should only be enabled in trusted environments.

--config-format

Possible choices: auto, hf, mistral

The format of the model config to load.

  • “auto” will try to load the config in hf format if available, otherwise it will try to load in mistral format.

Default: “ConfigFormat.AUTO”

--dtype

Possible choices: auto, half, float16, bfloat16, float, float32

Data type for model weights and activations.

  • “auto” will use FP16 precision for FP32 and FP16 models, and BF16 precision for BF16 models.

  • “half” for FP16. Recommended for AWQ quantization.

  • “float16” is the same as “half”.

  • “bfloat16” for a balance between precision and range.

  • “float” is shorthand for FP32 precision.

  • “float32” for FP32 precision.

Default: “auto”

--max-model-len

Model context length. If unspecified, will be automatically derived from the model config. Supports k/m/g/K/M/G suffixes in human-readable format. Examples:

  • 1k → 1000

  • 1K → 1024

--logits-processor-pattern

Optional regex pattern specifying valid logits processor qualified names that can be passed with the logits_processors extra completion argument. Defaults to None, which allows no processors.

--model-impl

Possible choices: auto, vllm, transformers

Which implementation of the model to use.

  • “auto” will try to use the vLLM implementation if it exists and fall back to the Transformers implementation if no vLLM implementation is available.

  • “vllm” will use the vLLM model implementation.

  • “transformers” will use the Transformers model implementation.

Default: “auto”

--disable-sliding-window

Disables sliding window, capping to sliding window size.

Default: False

--use-v2-block-manager

[DEPRECATED] block manager v1 has been removed and SelfAttnBlockSpaceManager (i.e. block manager v2) is now the default. Setting this flag to True or False has no effect on vLLM behavior.

Default: True

--seed

Random seed for operations.

--max-logprobs

Max number of log probs to return when logprobs is specified in SamplingParams.

Default: 20

--disable-log-stats

Disable logging statistics.

Default: False

--quantization, -q

Possible choices: aqlm, awq, deepspeedfp, tpu_int8, fp8, ptpc_fp8, fbgemm_fp8, modelopt, nvfp4, marlin, bitblas, gguf, gptq_marlin_24, gptq_marlin, gptq_bitblas, awq_marlin, gptq, compressed-tensors, bitsandbytes, qqq, hqq, experts_int8, neuron_quant, ipex, quark, moe_wna16, torchao, None

Method used to quantize the weights. If None, we first check the quantization_config attribute in the model config file. If that is None, we assume the model weights are not quantized and use dtype to determine the data type of the weights.

--rope-scaling

RoPE scaling configuration in JSON format. For example, {"rope_type":"dynamic","factor":2.0}

--rope-theta

RoPE theta. Use with rope_scaling. In some cases, changing the RoPE theta improves the performance of the scaled model.

--hf-token

The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).

--hf-overrides

Extra arguments for the HuggingFace config. This should be a JSON string that will be parsed into a dictionary.

--enforce-eager

Always use eager-mode PyTorch. If False, will use eager mode and CUDA graph in hybrid for maximal performance and flexibility.

Default: False

--max-seq-len-to-capture

Maximum sequence length covered by CUDA graphs. When a sequence has context length larger than this, we fall back to eager mode. Additionally for encoder-decoder models, if the sequence length of the encoder input is larger than this, we fall back to the eager mode.

Default: 8192

--mm-processor-kwargs

Overrides for the multi-modal processor obtained from AutoProcessor.from_pretrained. The available overrides depend on the model that is being run. For example, for Phi-3-Vision: {"num_crops": 4}.

--disable-mm-preprocessor-cache

If True, disable caching of the processed multi-modal inputs.

Default: False

--ignore-patterns

The pattern(s) to ignore when loading the model. Defaults to original/**/* to avoid repeated loading of llama's checkpoints.

Default: []

--served-model-name

The model name(s) used in the API. If multiple names are provided, the server will respond to any of them. The model name in the model field of a response will be the first name in this list. If not specified, the model name will be the same as the --model argument. Note that these name(s) are also used in the model_name tag of Prometheus metrics; if multiple names are provided, the metrics tag will take the first one.

--qlora-adapter-name-or-path

Name or path of the QLoRA adapter.

--show-hidden-metrics-for-version

Enable deprecated Prometheus metrics that have been hidden since the specified version. For example, if a previously deprecated metric has been hidden since the v0.7.0 release, you can use --show-hidden-metrics-for-version=0.7 as a temporary escape hatch while you migrate to new metrics. The metric is likely to be removed completely in an upcoming release.

--otlp-traces-endpoint

Target URL to which OpenTelemetry traces will be sent.

--collect-detailed-traces

Valid choices are model, worker, all. It makes sense to set this only if --otlp-traces-endpoint is set. If set, detailed traces will be collected for the specified modules. This involves the use of possibly costly and/or blocking operations and hence might have a performance impact.

--disable-async-output-proc

Disable async output processing. This may result in lower performance.

Default: False

--scheduler-cls

The scheduler class to use. “vllm.core.scheduler.Scheduler” is the default scheduler. Can be a class directly or the path to a class of form “mod.custom_class”.

Default: “vllm.core.scheduler.Scheduler”

--override-neuron-config

Override or set neuron device configuration, e.g. {"cast_logits_dtype": "bfloat16"}.

--override-pooler-config

Override or set the pooling method for pooling models. e.g. {"pooling_type": "mean", "normalize": false}.

--compilation-config, -O

torch.compile configuration for the model. When it is a number (0, 1, 2, 3), it will be interpreted as the optimization level. NOTE: level 0 is the default level without any optimization. Levels 1 and 2 are for internal testing only. Level 3 is the recommended level for production. To specify the full compilation config, use a JSON string, e.g. {"level": 3, "cudagraph_capture_sizes": [1, 2, 4, 8]}. Following the convention of traditional compilers, using -O without a space is also supported; -O3 is equivalent to -O 3.

--kv-transfer-config

The configurations for distributed KV cache transfer. Should be a JSON string.

--worker-cls

The worker class to use for distributed execution.

Default: “auto”

--worker-extension-cls

The worker extension class on top of the worker class. It is useful if you just want to add new functions to the worker class without changing the existing ones.

Default: “”

--generation-config

The folder path to the generation config. Defaults to 'auto', in which case the generation config will be loaded from the model path. If set to 'vllm', no generation config is loaded and vLLM defaults will be used. If set to a folder path, the generation config will be loaded from the specified folder. If max_new_tokens is specified in the generation config, it sets a server-wide limit on the number of output tokens for all requests.

Default: auto

--override-generation-config

Overrides or sets the generation config in JSON format, e.g. {"temperature": 0.5}. If used with --generation-config=auto, the override parameters will be merged with the default config from the model. If the generation config is None, only the override parameters are used.

--enable-sleep-mode

Enable sleep mode for the engine. (only cuda platform is supported)

Default: False

--additional-config

Additional config for the specified platform in JSON format. Different platforms may support different configs. Make sure the configs are valid for the platform you are using. The input format is like '{"config_key": "config_value"}'.

--enable-reasoning

Whether to enable reasoning_content for the model. If enabled, the model will be able to generate reasoning content.

Default: False

--disable-cascade-attn

Disable cascade attention for V1. While cascade attention does not change the mathematical correctness, disabling it could be useful for preventing potential numerical issues. Note that even if this is set to False, cascade attention will only be used when the heuristic indicates that it is beneficial.

Default: False

--disable-log-requests

Disable logging requests.

Default: False

--max-log-len

Max number of prompt characters or prompt ID numbers being printed in log. The default of None means unlimited.

--disable-fastapi-docs

Disable FastAPI’s OpenAPI schema, Swagger UI, and ReDoc endpoint.

Default: False

--enable-prompt-tokens-details

If set to True, enable prompt_tokens_details in usage.

Default: False

--enable-server-load-tracking

If set to True, enable tracking server_load_metrics in the app state.

Default: False

LoadConfig#

Configuration for loading the model weights.

--load-format

Possible choices: auto, pt, safetensors, npcache, dummy, tensorizer, sharded_state, gguf, bitsandbytes, mistral, runai_streamer, runai_streamer_sharded, fastsafetensors

The format of the model weights to load:

  • “auto” will try to load the weights in the safetensors format and fall back to the pytorch bin format if safetensors format is not available.

  • “pt” will load the weights in the pytorch bin format.

  • “safetensors” will load the weights in the safetensors format.

  • “npcache” will load the weights in pytorch format and store a numpy cache to speed up the loading.

  • “dummy” will initialize the weights with random values, which is mainly for profiling.

  • “tensorizer” will use CoreWeave's tensorizer library for fast weight loading. See the Tensorize vLLM Model script in the Examples section for more information.

  • “runai_streamer” will load the Safetensors weights using Run:ai Model Streamer.

  • “bitsandbytes” will load the weights using bitsandbytes quantization.

  • “sharded_state” will load weights from pre-sharded checkpoint files, supporting efficient loading of tensor-parallel models.

  • “gguf” will load weights from GGUF format files (details specified in ggml-org/ggml).

  • “mistral” will load weights from consolidated safetensors files used by Mistral models.

Default: “auto”

--download-dir

Directory to download and load the weights; defaults to the default cache directory of Hugging Face.

--model-loader-extra-config

Extra config for model loader. This will be passed to the model loader corresponding to the chosen load_format. This should be a JSON string that will be parsed into a dictionary.

Default: {}

--use-tqdm-on-load, --no-use-tqdm-on-load

Whether to enable tqdm for showing progress bar when loading model weights.

Default: True

DecodingConfig#

Dataclass which contains the decoding strategy of the engine.

--guided-decoding-backend

Possible choices: auto, guidance, xgrammar

Which engine will be used for guided decoding (JSON schema / regex etc) by default. With “auto”, we will make opinionated choices based on request contents and what the backend libraries currently support, so the behavior is subject to change in each release.

Default: “auto”

--reasoning-parser

Possible choices: deepseek_r1, granite

Select the reasoning parser depending on the model that you're using. This is used to parse the reasoning content into OpenAI API format. Required for --enable-reasoning.

ParallelConfig#

Configuration for the distributed execution.

--distributed-executor-backend

Possible choices: external_launcher, mp, ray, uni, None

Backend to use for distributed model workers, either “ray” or “mp” (multiprocessing). If the product of pipeline_parallel_size and tensor_parallel_size is less than or equal to the number of GPUs available, “mp” will be used to keep processing on a single host. Otherwise, this will default to “ray” if Ray is installed and fail otherwise. Note that tpu and hpu only support Ray for distributed inference.

--pipeline-parallel-size, -pp

Number of pipeline parallel groups.

Default: 1

--tensor-parallel-size, -tp

Number of tensor parallel groups.

Default: 1

--data-parallel-size, -dp

Number of data parallel groups. MoE layers will be sharded according to the product of the tensor parallel size and data parallel size.

Default: 1

--enable-expert-parallel, --no-enable-expert-parallel

Use expert parallelism instead of tensor parallelism for MoE layers.

Default: False

--max-parallel-loading-workers

Maximum number of parallel loading workers when loading the model sequentially in multiple batches. This helps to avoid RAM OOM when using tensor parallelism with large models.

--ray-workers-use-nsight, --no-ray-workers-use-nsight

Whether to profile Ray workers with nsight, see https://docs.ray.io/en/latest/ray-observability/user-guides/profiling.html#profiling-nsight-profiler.

Default: False

--disable-custom-all-reduce, --no-disable-custom-all-reduce

Disable the custom all-reduce kernel and fall back to NCCL.

Default: False

CacheConfig#

Configuration for the KV cache.

--block-size

Possible choices: 1, 8, 16, 32, 64, 128

Size of a contiguous cache block in number of tokens. This is ignored on neuron devices and set to --max-model-len. On CUDA devices, only block sizes up to 32 are supported. On HPU devices, block size defaults to 128.

This config has no static default. If left unspecified by the user, it will be set in Platform.check_and_update_configs() based on the current platform.

--gpu-memory-utilization

The fraction of GPU memory to be used for the model executor, which can range from 0 to 1. For example, a value of 0.5 would imply 50% GPU memory utilization. If unspecified, will use the default value of 0.9. This is a per-instance limit, and only applies to the current vLLM instance. It does not matter if you have another vLLM instance running on the same GPU. For example, if you have two vLLM instances running on the same GPU, you can set the GPU memory utilization to 0.5 for each instance.

Default: 0.9

--swap-space

Size of the CPU swap space per GPU (in GiB).

Default: 4

--kv-cache-dtype

Possible choices: auto, fp8, fp8_e4m3, fp8_e5m2

Data type for kv cache storage. If “auto”, will use model data type. CUDA 11.8+ supports fp8 (=fp8_e4m3) and fp8_e5m2. ROCm (AMD GPU) supports fp8 (=fp8_e4m3).

Default: “auto”

--num-gpu-blocks-override

Number of GPU blocks to use. This overrides the profiled num_gpu_blocks if specified. Does nothing if None. Used for testing preemption.

--enable-prefix-caching, --no-enable-prefix-caching

Whether to enable prefix caching. Disabled by default for V0. Enabled by default for V1.

--prefix-caching-hash-algo

Possible choices: builtin, sha256

Set the hash algorithm for prefix caching:

  • “builtin” is Python’s built-in hash.

  • “sha256” is collision resistant but with certain overheads.

Default: “builtin”

--cpu-offload-gb

The space in GiB to offload to CPU, per GPU. Default is 0, which means no offloading. Intuitively, this argument can be seen as a virtual way to increase the GPU memory size. For example, if you have one 24 GB GPU and set this to 10, virtually you can think of it as a 34 GB GPU. Then you can load a 13B model with BF16 weight, which requires at least 26GB GPU memory. Note that this requires fast CPU-GPU interconnect, as part of the model is loaded from CPU memory to GPU memory on the fly in each model forward pass.

Default: 0

--calculate-kv-scales, --no-calculate-kv-scales

This enables dynamic calculation of k_scale and v_scale when kv_cache_dtype is fp8. If False, the scales will be loaded from the model checkpoint if available. Otherwise, the scales will default to 1.0.

Default: False

TokenizerPoolConfig#

This config is deprecated and will be removed in a future release.

Passing these parameters will have no effect. Please remove them from your configurations.

--tokenizer-pool-size

This parameter is deprecated and will be removed in a future release. Passing this parameter will have no effect. Please remove it from your configurations.

Default: 0

--tokenizer-pool-type

This parameter is deprecated and will be removed in a future release. Passing this parameter will have no effect. Please remove it from your configurations.

Default: “ray”

--tokenizer-pool-extra-config

This parameter is deprecated and will be removed in a future release. Passing this parameter will have no effect. Please remove it from your configurations.

Default: {}

MultiModalConfig#

Controls the behavior of multimodal models.

--limit-mm-per-prompt

The maximum number of input items allowed per prompt for each modality. This should be a JSON string that will be parsed into a dictionary. Defaults to 1 (V0) or 999 (V1) for each modality.

For example, to allow up to 16 images and 2 videos per prompt: {"images": 16, "videos": 2}

Default: {}

LoRAConfig#

Configuration for LoRA.

--enable-lora, --no-enable-lora

If True, enable handling of LoRA adapters.

--enable-lora-bias, --no-enable-lora-bias

Enable bias for LoRA adapters.

Default: False

--max-loras

Max number of LoRAs in a single batch.

Default: 1

--max-lora-rank

Max LoRA rank.

Default: 16

--lora-extra-vocab-size

Maximum size of extra vocabulary that can be present in a LoRA adapter (added to the base model vocabulary).

Default: 256

--lora-dtype

Possible choices: auto, bfloat16, float16

Data type for LoRA. If auto, will default to base model dtype.

Default: “auto”

--long-lora-scaling-factors

Specify multiple scaling factors (which can be different from the base model scaling factor; see e.g. Long LoRA) to allow for multiple LoRA adapters trained with those scaling factors to be used at the same time. If not specified, only adapters trained with the base model scaling factor are allowed.

--max-cpu-loras

Maximum number of LoRAs to store in CPU memory. Must be >= max_loras.

--fully-sharded-loras, --no-fully-sharded-loras

By default, only half of the LoRA computation is sharded with tensor parallelism. Enabling this will use the fully sharded layers. At high sequence length, max rank or tensor parallel size, this is likely faster.

Default: False

PromptAdapterConfig#

Configuration for PromptAdapters.

--enable-prompt-adapter, --no-enable-prompt-adapter

If True, enable handling of PromptAdapters.

--max-prompt-adapters

Max number of PromptAdapters in a batch.

Default: 1

--max-prompt-adapter-token

Max number of PromptAdapters tokens.

Default: 0

DeviceConfig#

Configuration for the device to use for vLLM execution.

--device

Possible choices: auto, cpu, cuda, hpu, neuron, tpu, xpu

Device type for vLLM execution.

Default: “auto”

SpeculativeConfig#

Configuration for speculative decoding.

--speculative-config

The configurations for speculative decoding. Should be a JSON string.

SchedulerConfig#

Scheduler configuration.

--max-num-batched-tokens

Maximum number of tokens to be processed in a single iteration.

This config has no static default. If left unspecified by the user, it will be set in EngineArgs.create_engine_config based on the usage context.

--max-num-seqs

Maximum number of sequences to be processed in a single iteration.

This config has no static default. If left unspecified by the user, it will be set in EngineArgs.create_engine_config based on the usage context.

--max-num-partial-prefills

For chunked prefill, the maximum number of sequences that can be partially prefilled concurrently.

Default: 1

--max-long-partial-prefills

For chunked prefill, the maximum number of prompts longer than long_prefill_token_threshold that will be prefilled concurrently. Setting this less than max_num_partial_prefills will allow shorter prompts to jump the queue in front of longer prompts in some cases, improving latency.

Default: 1

--long-prefill-token-threshold

For chunked prefill, a request is considered long if the prompt is longer than this number of tokens.

Default: 0

--num-lookahead-slots

The number of slots to allocate per sequence per step, beyond the known token ids. This is used in speculative decoding to store KV activations of tokens which may or may not be accepted.

NOTE: This will be replaced by speculative config in the future; it is present to enable correctness tests until then.

Default: 0

--scheduler-delay-factor

Apply a delay (of delay factor multiplied by previous prompt latency) before scheduling next prompt.

Default: 0.0

--preemption-mode

Possible choices: recompute, swap, None

Whether to perform preemption by swapping or recomputation. If not specified, we determine the mode as follows: We use recomputation by default since it incurs lower overhead than swapping. However, when the sequence group has multiple sequences (e.g., beam search), recomputation is not currently supported. In such a case, we use swapping instead.

--num-scheduler-steps

Maximum number of forward steps per scheduler call.

Default: 1

--multi-step-stream-outputs, --no-multi-step-stream-outputs

If False, then multi-step will stream outputs at the end of all steps

Default: True

--scheduling-policy

Possible choices: fcfs, priority

The scheduling policy to use:

  • “fcfs” means first come first served, i.e. requests are handled in order of arrival.

  • “priority” means requests are handled based on a given priority (lower value means earlier handling), with time of arrival deciding any ties.

Default: “fcfs”

--enable-chunked-prefill, --no-enable-chunked-prefill

If True, prefill requests can be chunked based on the remaining max_num_batched_tokens.

--disable-chunked-mm-input, --no-disable-chunked-mm-input

If set to true and chunked prefill is enabled, we do not want to partially schedule a multimodal item. Only used in V1. This ensures that if a request has a mixed prompt (like text tokens TTTT followed by image tokens IIIIIIIIII) where only some image tokens can be scheduled (like TTTTIIIII, leaving IIIII), it will be scheduled as TTTT in one step and IIIIIIIIII in the next.

Default: False

Configuration file#

You can load CLI arguments via a YAML config file. The argument names must be the long form of those outlined above.

For example:

# config.yaml

model: meta-llama/Llama-3.1-8B-Instruct
host: "127.0.0.1"
port: 6379
uvicorn-log-level: "info"

To use the above config file:

vllm serve --config config.yaml

Note

If an argument is supplied both on the command line and in the config file, the value from the command line will take precedence. The order of priorities is: command line > config file values > defaults. For example, with vllm serve SOME_MODEL --config config.yaml, SOME_MODEL takes precedence over the model setting in the config file.

API Reference#

Completions API#

Our Completions API is compatible with OpenAI’s Completions API; you can use the official OpenAI Python client to interact with it.

Code example: examples/online_serving/openai_completion_client.py

Extra parameters#

The following sampling parameters are supported.

    use_beam_search: bool = False
    top_k: Optional[int] = None
    min_p: Optional[float] = None
    repetition_penalty: Optional[float] = None
    length_penalty: float = 1.0
    stop_token_ids: Optional[list[int]] = Field(default_factory=list)
    include_stop_str_in_output: bool = False
    ignore_eos: bool = False
    min_tokens: int = 0
    skip_special_tokens: bool = True
    spaces_between_special_tokens: bool = True
    truncate_prompt_tokens: Optional[Annotated[int, Field(ge=1)]] = None
    allowed_token_ids: Optional[list[int]] = None
    prompt_logprobs: Optional[int] = None

The following extra parameters are supported:

    add_special_tokens: bool = Field(
        default=True,
        description=(
            "If true (the default), special tokens (e.g. BOS) will be added to "
            "the prompt."),
    )
    response_format: Optional[AnyResponseFormat] = Field(
        default=None,
        description=(
            "Similar to chat completion, this parameter specifies the format "
            "of output. Only {'type': 'json_object'}, {'type': 'json_schema'}"
            ", {'type': 'structural_tag'}, or {'type': 'text' } is supported."
        ),
    )
    guided_json: Optional[Union[str, dict, BaseModel]] = Field(
        default=None,
        description="If specified, the output will follow the JSON schema.",
    )
    guided_regex: Optional[str] = Field(
        default=None,
        description=(
            "If specified, the output will follow the regex pattern."),
    )
    guided_choice: Optional[list[str]] = Field(
        default=None,
        description=(
            "If specified, the output will be exactly one of the choices."),
    )
    guided_grammar: Optional[str] = Field(
        default=None,
        description=(
            "If specified, the output will follow the context free grammar."),
    )
    guided_decoding_backend: Optional[str] = Field(
        default=None,
        description=(
            "If specified, will override the default guided decoding backend "
            "of the server for this specific request. If set, must be one of "
            "'outlines' / 'lm-format-enforcer'"),
    )
    guided_whitespace_pattern: Optional[str] = Field(
        default=None,
        description=(
            "If specified, will override the default whitespace pattern "
            "for guided json decoding."),
    )
    priority: int = Field(
        default=0,
        description=(
            "The priority of the request (lower means earlier handling; "
            "default: 0). Any priority other than 0 will raise an error "
            "if the served model does not use priority scheduling."),
    )
    logits_processors: Optional[LogitsProcessors] = Field(
        default=None,
        description=(
            "A list of either qualified names of logits processors, or "
            "constructor objects, to apply when sampling. A constructor is "
            "a JSON object with a required 'qualname' field specifying the "
            "qualified name of the processor class/factory, and optional "
            "'args' and 'kwargs' fields containing positional and keyword "
            "arguments. For example: {'qualname': "
            "'my_module.MyLogitsProcessor', 'args': [1, 2], 'kwargs': "
            "{'param': 'value'}}."))

    return_tokens_as_token_ids: Optional[bool] = Field(
        default=None,
        description=(
            "If specified with 'logprobs', tokens are represented "
            " as strings of the form 'token_id:{token_id}' so that tokens "
            "that are not JSON-encodable can be identified."))

Chat API#

Our Chat API is compatible with OpenAI’s Chat Completions API; you can use the official OpenAI Python client to interact with it.

We support both Vision- and Audio-related parameters; see our Multimodal Inputs guide for more information.

  • Note: image_url.detail parameter is not supported.

Code example: examples/online_serving/openai_chat_completion_client.py
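
For illustration, the sketch below sends an image using the OpenAI multi-modal message format; the vision model name and the absence of an API key are assumptions, so adjust them to whatever you are actually serving:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"

chat_response = client.chat.completions.create(
    model="llava-hf/llava-1.5-7b-hf",  # assumed: any vision-capable model being served
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": image_url}},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    }],
)
print(chat_response.choices[0].message.content)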

Extra parameters#

The following sampling parameters are supported.

    best_of: Optional[int] = None
    use_beam_search: bool = False
    top_k: Optional[int] = None
    min_p: Optional[float] = None
    repetition_penalty: Optional[float] = None
    length_penalty: float = 1.0
    stop_token_ids: Optional[list[int]] = Field(default_factory=list)
    include_stop_str_in_output: bool = False
    ignore_eos: bool = False
    min_tokens: int = 0
    skip_special_tokens: bool = True
    spaces_between_special_tokens: bool = True
    truncate_prompt_tokens: Optional[Annotated[int, Field(ge=1)]] = None
    prompt_logprobs: Optional[int] = None

The following extra parameters are supported:

    echo: bool = Field(
        default=False,
        description=(
            "If true, the new message will be prepended with the last message "
            "if they belong to the same role."),
    )
    add_generation_prompt: bool = Field(
        default=True,
        description=
        ("If true, the generation prompt will be added to the chat template. "
         "This is a parameter used by chat template in tokenizer config of the "
         "model."),
    )
    continue_final_message: bool = Field(
        default=False,
        description=
        ("If this is set, the chat will be formatted so that the final "
         "message in the chat is open-ended, without any EOS tokens. The "
         "model will continue this message rather than starting a new one. "
         "This allows you to \"prefill\" part of the model's response for it. "
         "Cannot be used at the same time as `add_generation_prompt`."),
    )
    add_special_tokens: bool = Field(
        default=False,
        description=(
            "If true, special tokens (e.g. BOS) will be added to the prompt "
            "on top of what is added by the chat template. "
            "For most models, the chat template takes care of adding the "
            "special tokens so this should be set to false (as is the "
            "default)."),
    )
    documents: Optional[list[dict[str, str]]] = Field(
        default=None,
        description=
        ("A list of dicts representing documents that will be accessible to "
         "the model if it is performing RAG (retrieval-augmented generation)."
         " If the template does not support RAG, this argument will have no "
         "effect. We recommend that each document should be a dict containing "
         "\"title\" and \"text\" keys."),
    )
    chat_template: Optional[str] = Field(
        default=None,
        description=(
            "A Jinja template to use for this conversion. "
            "As of transformers v4.44, default chat template is no longer "
            "allowed, so you must provide a chat template if the tokenizer "
            "does not define one."),
    )
    chat_template_kwargs: Optional[dict[str, Any]] = Field(
        default=None,
        description=("Additional kwargs to pass to the template renderer. "
                     "Will be accessible by the chat template."),
    )
    mm_processor_kwargs: Optional[dict[str, Any]] = Field(
        default=None,
        description=("Additional kwargs to pass to the HF processor."),
    )
    guided_json: Optional[Union[str, dict, BaseModel]] = Field(
        default=None,
        description=("If specified, the output will follow the JSON schema."),
    )
    guided_regex: Optional[str] = Field(
        default=None,
        description=(
            "If specified, the output will follow the regex pattern."),
    )
    guided_choice: Optional[list[str]] = Field(
        default=None,
        description=(
            "If specified, the output will be exactly one of the choices."),
    )
    guided_grammar: Optional[str] = Field(
        default=None,
        description=(
            "If specified, the output will follow the context free grammar."),
    )
    structural_tag: Optional[str] = Field(
        default=None,
        description=(
            "If specified, the output will follow the structural tag schema."),
    )
    guided_decoding_backend: Optional[str] = Field(
        default=None,
        description=(
            "If specified, will override the default guided decoding backend "
            "of the server for this specific request. If set, must be either "
            "'outlines' / 'lm-format-enforcer'"),
    )
    guided_whitespace_pattern: Optional[str] = Field(
        default=None,
        description=(
            "If specified, will override the default whitespace pattern "
            "for guided json decoding."),
    )
    priority: int = Field(
        default=0,
        description=(
            "The priority of the request (lower means earlier handling; "
            "default: 0). Any priority other than 0 will raise an error "
            "if the served model does not use priority scheduling."),
    )
    request_id: str = Field(
        default_factory=lambda: f"{random_uuid()}",
        description=(
            "The request_id related to this request. If the caller does "
            "not set it, a random_uuid will be generated. This id is used "
            "through out the inference process and return in response."),
    )
    logits_processors: Optional[LogitsProcessors] = Field(
        default=None,
        description=(
            "A list of either qualified names of logits processors, or "
            "constructor objects, to apply when sampling. A constructor is "
            "a JSON object with a required 'qualname' field specifying the "
            "qualified name of the processor class/factory, and optional "
            "'args' and 'kwargs' fields containing positional and keyword "
            "arguments. For example: {'qualname': "
            "'my_module.MyLogitsProcessor', 'args': [1, 2], 'kwargs': "
            "{'param': 'value'}}."))
    return_tokens_as_token_ids: Optional[bool] = Field(
        default=None,
        description=(
            "If specified with 'logprobs', tokens are represented "
            " as strings of the form 'token_id:{token_id}' so that tokens "
            "that are not JSON-encodable can be identified."))

Embeddings API#

Our Embeddings API is compatible with OpenAI’s Embeddings API; you can use the official OpenAI Python client to interact with it.

If the model has a chat template, you can replace inputs with a list of messages (same schema as Chat API) which will be treated as a single prompt to the model.

Code example: examples/online_serving/openai_embedding_client.py
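
For plain text inputs, a request through the official client might look like the sketch below; the model name is only an example and should be replaced with whichever embedding model you are serving with --task embed:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

responses = client.embeddings.create(
    model="intfloat/e5-mistral-7b-instruct",  # assumed embedding model
    input=["Hello my name is", "vLLM makes serving embeddings easy"],
    encoding_format="float",
)
for data in responses.data:
    print(len(data.embedding))  # dimensionality of each returned vector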

Multi-modal inputs#

You can pass multi-modal inputs to embedding models by defining a custom chat template for the server and passing a list of messages in the request. Refer to the examples below for illustration.

To serve the model:

vllm serve TIGER-Lab/VLM2Vec-Full --task embed \
  --trust-remote-code --max-model-len 4096 --chat-template examples/template_vlm2vec.jinja

Important

Since VLM2Vec has the same model architecture as Phi-3.5-Vision, we have to explicitly pass --task embed to run this model in embedding mode instead of text generation mode.

The custom chat template is completely different from the original one for this model, and can be found here: examples/template_vlm2vec.jinja

Since this request schema is not defined by the OpenAI client, we post a request to the server using the lower-level requests library:

import requests

image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"

response = requests.post(
    "http://localhost:8000/v1/embeddings",
    json={
        "model": "TIGER-Lab/VLM2Vec-Full",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": "Represent the given image."},
            ],
        }],
        "encoding_format": "float",
    },
)
response.raise_for_status()
response_json = response.json()
print("Embedding output:", response_json["data"][0]["embedding"])

To serve the model:

vllm serve MrLight/dse-qwen2-2b-mrl-v1 --task embed \
  --trust-remote-code --max-model-len 8192 --chat-template examples/template_dse_qwen2_vl.jinja

Important

Like with VLM2Vec, we have to explicitly pass --task embed.

Additionally, MrLight/dse-qwen2-2b-mrl-v1 requires an EOS token for embeddings, which is handled by a custom chat template: examples/template_dse_qwen2_vl.jinja

Important

MrLight/dse-qwen2-2b-mrl-v1 requires a placeholder image of the minimum image size for text query embeddings. See the full code example below for details.

Full example: examples/online_serving/openai_chat_embedding_client_for_multimodal.py

Extra parameters#

The following pooling parameters are supported.

    additional_data: Optional[Any] = None

The following extra parameters are supported by default:

    add_special_tokens: bool = Field(
        default=True,
        description=(
            "If true (the default), special tokens (e.g. BOS) will be added to "
            "the prompt."),
    )
    priority: int = Field(
        default=0,
        description=(
            "The priority of the request (lower means earlier handling; "
            "default: 0). Any priority other than 0 will raise an error "
            "if the served model does not use priority scheduling."),
    )

For chat-like input (i.e. if messages is passed), these extra parameters are supported instead:

    add_special_tokens: bool = Field(
        default=False,
        description=(
            "If true, special tokens (e.g. BOS) will be added to the prompt "
            "on top of what is added by the chat template. "
            "For most models, the chat template takes care of adding the "
            "special tokens so this should be set to false (as is the "
            "default)."),
    )
    chat_template: Optional[str] = Field(
        default=None,
        description=(
            "A Jinja template to use for this conversion. "
            "As of transformers v4.44, default chat template is no longer "
            "allowed, so you must provide a chat template if the tokenizer "
            "does not define one."),
    )
    chat_template_kwargs: Optional[dict[str, Any]] = Field(
        default=None,
        description=("Additional kwargs to pass to the template renderer. "
                     "Will be accessible by the chat template."),
    )
    mm_processor_kwargs: Optional[dict[str, Any]] = Field(
        default=None,
        description=("Additional kwargs to pass to the HF processor."),
    )
    priority: int = Field(
        default=0,
        description=(
            "The priority of the request (lower means earlier handling; "
            "default: 0). Any priority other than 0 will raise an error "
            "if the served model does not use priority scheduling."),
    )
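
Because chat-like input is not covered by the OpenAI client schema, these parameters are simply added as top-level JSON fields when posting with the requests library. A minimal sketch, reusing the VLM2Vec server from the earlier example:

import requests

response = requests.post(
    "http://localhost:8000/v1/embeddings",
    json={
        "model": "TIGER-Lab/VLM2Vec-Full",
        "messages": [{
            "role": "user",
            "content": [{"type": "text", "text": "Represent this sentence."}],
        }],
        "encoding_format": "float",
        # Extra parameters are passed as additional top-level fields.
        "add_special_tokens": False,
        "priority": 0,
    },
)
response.raise_for_status()
print(response.json()["data"][0]["embedding"])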

Transcriptions API#

Our Transcriptions API is compatible with OpenAI’s Transcriptions API; you can use the official OpenAI Python client to interact with it.

Note

To use the Transcriptions API, please install vLLM with the extra audio dependencies using pip install vllm[audio].

Code example: examples/online_serving/openai_transcription_client.py
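
Below is a minimal sketch using the official OpenAI Python client; the Whisper model name and the audio file path are assumptions, so substitute the model you are actually serving.

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")

# "audio.mp3" and the model name are placeholders.
with open("audio.mp3", "rb") as audio_file:
    transcription = client.audio.transcriptions.create(
        model="openai/whisper-large-v3",
        file=audio_file,
        language="en",
        response_format="text",
        temperature=0.0,
    )
print(transcription)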

Extra parameters#

The following sampling parameters are supported.

    temperature: float = Field(default=0.0)
    """The sampling temperature, between 0 and 1.

    Higher values like 0.8 will make the output more random, while lower values
    like 0.2 will make it more focused / deterministic. If set to 0, the model
    will use [log probability](https://en.wikipedia.org/wiki/Log_probability)
    to automatically increase the temperature until certain thresholds are hit.
    """

    top_p: Optional[float] = None
    """Enables nucleus (top-p) sampling, where tokens are selected from the 
    smallest possible set whose cumulative probability exceeds `p`.
    """

    top_k: Optional[int] = None
    """Limits sampling to the `k` most probable tokens at each step."""

    min_p: Optional[float] = None
    """Filters out tokens with a probability lower than `min_p`, ensuring a 
    minimum likelihood threshold during sampling.
    """

    seed: Optional[int] = Field(None, ge=_LONG_INFO.min, le=_LONG_INFO.max)
    """The seed to use for sampling."""

    frequency_penalty: Optional[float] = 0.0
    """The frequency penalty to use for sampling."""

    repetition_penalty: Optional[float] = None
    """The repetition penalty to use for sampling."""

    presence_penalty: Optional[float] = 0.0
    """The presence penalty to use for sampling."""

The following extra parameters are supported:

    stream: Optional[bool] = False
    """Custom field not present in the original OpenAI definition. When set, 
    it will enable output to be streamed in a similar fashion as the Chat
    Completion endpoint. 
    """
    # Flattened stream option to simplify form data.
    stream_include_usage: Optional[bool] = False
    stream_continuous_usage_stats: Optional[bool] = False
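
Because transcription requests are sent as multipart form data, these parameters can be supplied as plain form fields. A minimal sketch with the requests library; the model name, file path, and parameter values are only illustrative.

import requests

with open("audio.mp3", "rb") as audio_file:
    response = requests.post(
        "http://localhost:8000/v1/audio/transcriptions",
        files={"file": audio_file},
        data={
            "model": "openai/whisper-large-v3",  # placeholder model name
            "language": "en",
            # Extra/flattened parameters are sent as ordinary form fields.
            "top_p": 0.9,
            "seed": 42,
            "stream": False,
        },
    )
response.raise_for_status()
print(response.json())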

Tokenizer API#

Our Tokenizer API is a simple wrapper over HuggingFace-style tokenizers. It consists of two endpoints:

  • /tokenize corresponds to calling tokenizer.encode().

  • /detokenize corresponds to calling tokenizer.decode().
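
A minimal sketch of both endpoints using the requests library; the model name is a placeholder, and the field names shown ("prompt", "tokens") are assumptions based on the server's tokenizer schema, so check the API reference if your version differs.

import requests

base_url = "http://localhost:8000"
model = "NousResearch/Meta-Llama-3-8B-Instruct"  # placeholder; use your served model

# /tokenize wraps tokenizer.encode() for the given prompt.
tokenize = requests.post(
    f"{base_url}/tokenize",
    json={"model": model, "prompt": "Hello, world!"},
)
tokenize.raise_for_status()
tokens = tokenize.json()["tokens"]
print("Tokens:", tokens)

# /detokenize wraps tokenizer.decode() on the returned token IDs.
detokenize = requests.post(
    f"{base_url}/detokenize",
    json={"model": model, "tokens": tokens},
)
detokenize.raise_for_status()
print("Text:", detokenize.json())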

Pooling API#

Our Pooling API encodes input prompts using a pooling model and returns the corresponding hidden states.

The input format is the same as that of the Embeddings API, but the output data can contain an arbitrarily nested list rather than just a 1-D list of floats.

Code example: examples/online_serving/openai_pooling_client.py
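
A minimal sketch with the requests library; the /pooling route and the model name here are assumptions, so see the referenced code example for the authoritative version.

import requests

response = requests.post(
    "http://localhost:8000/pooling",
    json={
        "model": "intfloat/e5-mistral-7b-instruct",  # placeholder pooling model
        "input": "vLLM is great!",
    },
)
response.raise_for_status()
# Each item in "data" may contain an arbitrarily nested list
# rather than a flat embedding.
print(response.json()["data"][0])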

Score API#

Our Score API can apply a cross-encoder model or an embedding model to predict scores for sentence pairs. When using an embedding model, the score corresponds to the cosine similarity between each embedding pair. Usually, the score for a sentence pair refers to the similarity between the two sentences, on a scale of 0 to 1.

You can find the documentation for cross-encoder models at sbert.net.

Code example: examples/online_serving/openai_cross_encoder_score.py

Single inference#

You can pass a string to both text_1 and text_2, forming a single sentence pair.

Request:

curl -X 'POST' \
  'http://127.0.0.1:8000/score' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "model": "BAAI/bge-reranker-v2-m3",
  "encoding_format": "float",
  "text_1": "What is the capital of France?",
  "text_2": "The capital of France is Paris."
}'

Response:

{
  "id": "score-request-id",
  "object": "list",
  "created": 693447,
  "model": "BAAI/bge-reranker-v2-m3",
  "data": [
    {
      "index": 0,
      "object": "score",
      "score": 1
    }
  ],
  "usage": {}
}

Batch inference#

You can pass a string to text_1 and a list to text_2, forming multiple sentence pairs where each pair is built from text_1 and a string in text_2. The total number of pairs is len(text_2).

Request:

curl -X 'POST' \
  'http://127.0.0.1:8000/score' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "model": "BAAI/bge-reranker-v2-m3",
  "text_1": "What is the capital of France?",
  "text_2": [
    "The capital of Brazil is Brasilia.",
    "The capital of France is Paris."
  ]
}'

Response:

{
  "id": "score-request-id",
  "object": "list",
  "created": 693570,
  "model": "BAAI/bge-reranker-v2-m3",
  "data": [
    {
      "index": 0,
      "object": "score",
      "score": 0.001094818115234375
    },
    {
      "index": 1,
      "object": "score",
      "score": 1
    }
  ],
  "usage": {}
}

You can pass a list to both text_1 and text_2, forming multiple sentence pairs where each pair is built from a string in text_1 and the corresponding string in text_2 (similar to zip()). The total number of pairs is len(text_2).

Request:

curl -X 'POST' \
  'http://127.0.0.1:8000/score' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "model": "BAAI/bge-reranker-v2-m3",
  "encoding_format": "float",
  "text_1": [
    "What is the capital of Brazil?",
    "What is the capital of France?"
  ],
  "text_2": [
    "The capital of Brazil is Brasilia.",
    "The capital of France is Paris."
  ]
}'

Response:

{
  "id": "score-request-id",
  "object": "list",
  "created": 693447,
  "model": "BAAI/bge-reranker-v2-m3",
  "data": [
    {
      "index": 0,
      "object": "score",
      "score": 1
    },
    {
      "index": 1,
      "object": "score",
      "score": 1
    }
  ],
  "usage": {}
}
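
The same requests can be issued from Python; here is a minimal sketch of the batch case above using the requests library.

import requests

response = requests.post(
    "http://localhost:8000/score",
    json={
        "model": "BAAI/bge-reranker-v2-m3",
        "text_1": "What is the capital of France?",
        "text_2": [
            "The capital of Brazil is Brasilia.",
            "The capital of France is Paris.",
        ],
    },
)
response.raise_for_status()
for item in response.json()["data"]:
    print(item["index"], item["score"])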

Extra parameters#

The following pooling parameters are supported.

    additional_data: Optional[Any] = None

The following extra parameters are supported:

    priority: int = Field(
        default=0,
        description=(
            "The priority of the request (lower means earlier handling; "
            "default: 0). Any priority other than 0 will raise an error "
            "if the served model does not use priority scheduling."),
    )

Re-rank API#

Our Re-rank API can apply an embedding model or a cross-encoder model to predict relevance scores between a single query and each document in a list of documents. Usually, the score for a sentence pair refers to the similarity between the two sentences, on a scale of 0 to 1.

You can find the documentation for cross-encoder models at sbert.net.

The rerank endpoints support popular re-rank models such as BAAI/bge-reranker-base and other models supporting the score task. Additionally, the /rerank, /v1/rerank, and /v2/rerank endpoints are compatible with both Jina AI's and Cohere's re-rank API interfaces, ensuring compatibility with popular open-source tools.

Code example: examples/online_serving/jinaai_rerank_client.py

Example Request#

Note that the top_n request parameter is optional and will default to the length of the documents field. Result documents will be sorted by relevance, and the index property can be used to determine original order.

Request:

curl -X 'POST' \
  'http://127.0.0.1:8000/v1/rerank' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "model": "BAAI/bge-reranker-base",
  "query": "What is the capital of France?",
  "documents": [
    "The capital of Brazil is Brasilia.",
    "The capital of France is Paris.",
    "Horses and cows are both animals"
  ]
}'

Response:

{
  "id": "rerank-fae51b2b664d4ed38f5969b612edff77",
  "model": "BAAI/bge-reranker-base",
  "usage": {
    "total_tokens": 56
  },
  "results": [
    {
      "index": 1,
      "document": {
        "text": "The capital of France is Paris."
      },
      "relevance_score": 0.99853515625
    },
    {
      "index": 0,
      "document": {
        "text": "The capital of Brazil is Brasilia."
      },
      "relevance_score": 0.0005860328674316406
    }
  ]
}
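
The same request can be issued from Python; below is a minimal sketch that also sets the optional top_n parameter described above.

import requests

response = requests.post(
    "http://localhost:8000/v1/rerank",
    json={
        "model": "BAAI/bge-reranker-base",
        "query": "What is the capital of France?",
        "documents": [
            "The capital of Brazil is Brasilia.",
            "The capital of France is Paris.",
            "Horses and cows are both animals",
        ],
        "top_n": 2,  # optional; defaults to the number of documents
    },
)
response.raise_for_status()
for result in response.json()["results"]:
    print(result["index"], result["relevance_score"])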

Extra parameters#

The following pooling parameters are supported.

    additional_data: Optional[Any] = None

The following extra parameters are supported:

    priority: int = Field(
        default=0,
        description=(
            "The priority of the request (lower means earlier handling; "
            "default: 0). Any priority other than 0 will raise an error "
            "if the served model does not use priority scheduling."),
    )