Engine Arguments#
Engine arguments control the behavior of the vLLM engine. For offline inference, they are part of the arguments to the LLM class. For online serving, they are part of the arguments to vllm serve.
Below, you can find an explanation of every engine argument:
usage: vllm serve [-h] [--model MODEL]
[--task {auto,generate,embedding,embed,classify,score,reward,transcription}]
[--tokenizer TOKENIZER] [--hf-config-path HF_CONFIG_PATH]
[--skip-tokenizer-init] [--revision REVISION]
[--code-revision CODE_REVISION]
[--tokenizer-revision TOKENIZER_REVISION]
[--tokenizer-mode {auto,slow,mistral,custom}]
[--trust-remote-code]
[--allowed-local-media-path ALLOWED_LOCAL_MEDIA_PATH]
[--download-dir DOWNLOAD_DIR]
[--load-format {auto,pt,safetensors,npcache,dummy,tensorizer,sharded_state,gguf,bitsandbytes,mistral,runai_streamer,fastsafetensors}]
[--config-format {auto,hf,mistral}]
[--dtype {auto,half,float16,bfloat16,float,float32}]
[--kv-cache-dtype {auto,fp8,fp8_e5m2,fp8_e4m3}]
[--max-model-len MAX_MODEL_LEN]
[--guided-decoding-backend GUIDED_DECODING_BACKEND]
[--logits-processor-pattern LOGITS_PROCESSOR_PATTERN]
[--model-impl {auto,vllm,transformers}]
[--distributed-executor-backend {ray,mp,uni,external_launcher}]
[--pipeline-parallel-size PIPELINE_PARALLEL_SIZE]
[--tensor-parallel-size TENSOR_PARALLEL_SIZE]
[--data-parallel-size DATA_PARALLEL_SIZE]
[--enable-expert-parallel]
[--max-parallel-loading-workers MAX_PARALLEL_LOADING_WORKERS]
[--ray-workers-use-nsight] [--block-size {8,16,32,64,128}]
[--enable-prefix-caching | --no-enable-prefix-caching]
[--prefix-caching-hash-algo {builtin,sha256}]
[--disable-sliding-window] [--use-v2-block-manager]
[--num-lookahead-slots NUM_LOOKAHEAD_SLOTS] [--seed SEED]
[--swap-space SWAP_SPACE] [--cpu-offload-gb CPU_OFFLOAD_GB]
[--gpu-memory-utilization GPU_MEMORY_UTILIZATION]
[--num-gpu-blocks-override NUM_GPU_BLOCKS_OVERRIDE]
[--max-num-batched-tokens MAX_NUM_BATCHED_TOKENS]
[--max-num-partial-prefills MAX_NUM_PARTIAL_PREFILLS]
[--max-long-partial-prefills MAX_LONG_PARTIAL_PREFILLS]
[--long-prefill-token-threshold LONG_PREFILL_TOKEN_THRESHOLD]
[--max-num-seqs MAX_NUM_SEQS] [--max-logprobs MAX_LOGPROBS]
[--disable-log-stats]
[--quantization {aqlm,awq,deepspeedfp,tpu_int8,fp8,ptpc_fp8,fbgemm_fp8,modelopt,nvfp4,marlin,gguf,gptq_marlin_24,gptq_marlin,awq_marlin,gptq,compressed-tensors,bitsandbytes,qqq,hqq,experts_int8,neuron_quant,ipex,quark,moe_wna16,None}]
[--rope-scaling ROPE_SCALING] [--rope-theta ROPE_THETA]
[--hf-overrides HF_OVERRIDES] [--enforce-eager]
[--max-seq-len-to-capture MAX_SEQ_LEN_TO_CAPTURE]
[--disable-custom-all-reduce]
[--tokenizer-pool-size TOKENIZER_POOL_SIZE]
[--tokenizer-pool-type TOKENIZER_POOL_TYPE]
[--tokenizer-pool-extra-config TOKENIZER_POOL_EXTRA_CONFIG]
[--limit-mm-per-prompt LIMIT_MM_PER_PROMPT]
[--mm-processor-kwargs MM_PROCESSOR_KWARGS]
[--disable-mm-preprocessor-cache] [--enable-lora]
[--enable-lora-bias] [--max-loras MAX_LORAS]
[--max-lora-rank MAX_LORA_RANK]
[--lora-extra-vocab-size LORA_EXTRA_VOCAB_SIZE]
[--lora-dtype {auto,float16,bfloat16}]
[--long-lora-scaling-factors LONG_LORA_SCALING_FACTORS]
[--max-cpu-loras MAX_CPU_LORAS] [--fully-sharded-loras]
[--enable-prompt-adapter]
[--max-prompt-adapters MAX_PROMPT_ADAPTERS]
[--max-prompt-adapter-token MAX_PROMPT_ADAPTER_TOKEN]
[--device {auto,cuda,neuron,cpu,tpu,xpu,hpu}]
[--num-scheduler-steps NUM_SCHEDULER_STEPS]
[--use-tqdm-on-load | --no-use-tqdm-on-load]
[--multi-step-stream-outputs [MULTI_STEP_STREAM_OUTPUTS]]
[--scheduler-delay-factor SCHEDULER_DELAY_FACTOR]
[--enable-chunked-prefill [ENABLE_CHUNKED_PREFILL]]
[--speculative-config SPECULATIVE_CONFIG]
[--model-loader-extra-config MODEL_LOADER_EXTRA_CONFIG]
[--ignore-patterns IGNORE_PATTERNS]
[--preemption-mode PREEMPTION_MODE]
[--served-model-name SERVED_MODEL_NAME [SERVED_MODEL_NAME ...]]
[--qlora-adapter-name-or-path QLORA_ADAPTER_NAME_OR_PATH]
[--show-hidden-metrics-for-version SHOW_HIDDEN_METRICS_FOR_VERSION]
[--otlp-traces-endpoint OTLP_TRACES_ENDPOINT]
[--collect-detailed-traces COLLECT_DETAILED_TRACES]
[--disable-async-output-proc]
[--scheduling-policy {fcfs,priority}]
[--scheduler-cls SCHEDULER_CLS]
[--override-neuron-config OVERRIDE_NEURON_CONFIG]
[--override-pooler-config OVERRIDE_POOLER_CONFIG]
[--compilation-config COMPILATION_CONFIG]
[--kv-transfer-config KV_TRANSFER_CONFIG]
[--worker-cls WORKER_CLS]
[--worker-extension-cls WORKER_EXTENSION_CLS]
[--generation-config GENERATION_CONFIG]
[--override-generation-config OVERRIDE_GENERATION_CONFIG]
[--enable-sleep-mode] [--calculate-kv-scales]
[--additional-config ADDITIONAL_CONFIG] [--enable-reasoning]
[--reasoning-parser {deepseek_r1,granite}]
[--disable-cascade-attn]
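For example, a typical invocation combines a handful of the arguments above; the model name and values below are purely illustrative, and the same options generally map to keyword arguments of the LLM class (dashes replaced by underscores) for offline inference:
vllm serve facebook/opt-125m \
    --dtype auto \
    --max-model-len 2048 \
    --gpu-memory-utilization 0.9 \
    --max-num-seqs 256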
Named Arguments#
- --model
Name or path of the huggingface model to use.
Default: “facebook/opt-125m”
- --task
Possible choices: auto, generate, embedding, embed, classify, score, reward, transcription
The task to use the model for. Each vLLM instance only supports one task, even if the same model can be used for multiple tasks. When the model only supports one task, "auto" can be used to select it; otherwise, you must explicitly specify which task to use.
Default: “auto”
- --tokenizer
Name or path of the huggingface tokenizer to use. If unspecified, model name or path will be used.
- --hf-config-path
Name or path of the huggingface config to use. If unspecified, model name or path will be used.
- --skip-tokenizer-init
Skip initialization of tokenizer and detokenizer. Expects valid prompt_token_ids and None for prompt from the input. The generated output will contain token ids.
- --revision
The specific model version to use. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.
- --code-revision
The specific revision to use for the model code on Hugging Face Hub. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.
- --tokenizer-revision
Revision of the huggingface tokenizer to use. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.
- --tokenizer-mode
Possible choices: auto, slow, mistral, custom
The tokenizer mode.
“auto” will use the fast tokenizer if available.
“slow” will always use the slow tokenizer.
“mistral” will always use the mistral_common tokenizer.
“custom” will use --tokenizer to select the preregistered tokenizer.
Default: “auto”
- --trust-remote-code
Trust remote code from huggingface.
- --allowed-local-media-path
Allows API requests to read local images or videos from directories specified by the server file system. This is a security risk. Should only be enabled in trusted environments.
- --download-dir
Directory to download and load the weights.
- --load-format
Possible choices: auto, pt, safetensors, npcache, dummy, tensorizer, sharded_state, gguf, bitsandbytes, mistral, runai_streamer, fastsafetensors
The format of the model weights to load.
“auto” will try to load the weights in the safetensors format and fall back to the pytorch bin format if safetensors format is not available.
“pt” will load the weights in the pytorch bin format.
“safetensors” will load the weights in the safetensors format.
“npcache” will load the weights in pytorch format and store a numpy cache to speed up the loading.
“dummy” will initialize the weights with random values, which is mainly for profiling.
“tensorizer” will load the weights using tensorizer from CoreWeave. See the Tensorize vLLM Model script in the Examples section for more information.
“runai_streamer” will load the Safetensors weights using Run:ai Model Streamer.
“bitsandbytes” will load the weights using bitsandbytes quantization.
“sharded_state” will load weights from pre-sharded checkpoint files, supporting efficient loading of tensor-parallel models.
“gguf” will load weights from GGUF format files (details specified in ggml-org/ggml).
“mistral” will load weights from consolidated safetensors files used by Mistral models.
Default: “auto”
- --config-format
Possible choices: auto, hf, mistral
The format of the model config to load.
“auto” will try to load the config in hf format if available, else it will try to load it in mistral format.
Default: “ConfigFormat.AUTO”
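As an illustration, official Mistral-format checkpoints are typically served by combining the Mistral-specific tokenizer, config, and load formats; the model name here is only an example:
vllm serve mistralai/Mistral-7B-Instruct-v0.3 \
    --tokenizer-mode mistral \
    --config-format mistral \
    --load-format mistral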
- --dtype
Possible choices: auto, half, float16, bfloat16, float, float32
Data type for model weights and activations.
“auto” will use FP16 precision for FP32 and FP16 models, and BF16 precision for BF16 models.
“half” for FP16. Recommended for AWQ quantization.
“float16” is the same as “half”.
“bfloat16” for a balance between precision and range.
“float” is shorthand for FP32 precision.
“float32” for FP32 precision.
Default: “auto”
- --kv-cache-dtype
Possible choices: auto, fp8, fp8_e5m2, fp8_e4m3
Data type for kv cache storage. If “auto”, will use model data type. CUDA 11.8+ supports fp8 (=fp8_e4m3) and fp8_e5m2. ROCm (AMD GPU) supports fp8 (=fp8_e4m3).
Default: “auto”
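For instance, to store the KV cache in FP8 and compute the scaling factors at runtime (see --calculate-kv-scales further below), a sketch of the invocation might be (model name illustrative):
vllm serve facebook/opt-125m \
    --kv-cache-dtype fp8 \
    --calculate-kv-scales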
- --max-model-len
Model context length. If unspecified, will be automatically derived from the model config.
- --guided-decoding-backend
Which engine will be used for guided decoding (JSON schema / regex etc) by default. Currently supports mlc-ai/xgrammar and guidance-ai/llguidance. Valid backend values are “xgrammar”, “guidance”, and “auto”. With “auto”, we will make opinionated choices based on request contents and what the backend libraries currently support, so the behavior is subject to change in each release.
Default: “xgrammar”
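For example, to select the llguidance backend explicitly instead of the default (model name illustrative):
vllm serve facebook/opt-125m --guided-decoding-backend guidance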
- --logits-processor-pattern
Optional regex pattern specifying valid logits processor qualified names that can be passed with the logits_processors extra completion argument. Defaults to None, which allows no processors.
- --model-impl
Possible choices: auto, vllm, transformers
Which implementation of the model to use.
“auto” will try to use the vLLM implementation if it exists and fall back to the Transformers implementation if no vLLM implementation is available.
“vllm” will use the vLLM model implementation.
“transformers” will use the Transformers model implementation.
Default: “auto”
- --distributed-executor-backend
Possible choices: ray, mp, uni, external_launcher
Backend to use for distributed model workers, either “ray” or “mp” (multiprocessing). If the product of pipeline_parallel_size and tensor_parallel_size is less than or equal to the number of GPUs available, “mp” will be used to keep processing on a single host. Otherwise, this will default to “ray” if Ray is installed and fail otherwise. Note that tpu only supports Ray for distributed inference.
- --pipeline-parallel-size, -pp
Number of pipeline stages.
Default: 1
- --tensor-parallel-size, -tp
Number of tensor parallel replicas.
Default: 1
- --data-parallel-size, -dp
Number of data parallel replicas. MoE layers will be sharded according to the product of the tensor-parallel-size and data-parallel-size.
Default: 1
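For example, an 8-GPU node could be split into 4-way tensor parallelism and 2-way pipeline parallelism; the model name below is only a placeholder and must actually fit the chosen layout:
vllm serve facebook/opt-125m \
    --tensor-parallel-size 4 \
    --pipeline-parallel-size 2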
- --enable-expert-parallel
Use expert parallelism instead of tensor parallelism for MoE layers.
- --max-parallel-loading-workers
Load model sequentially in multiple batches, to avoid RAM OOM when using tensor parallel and large models.
- --ray-workers-use-nsight
If specified, use nsight to profile Ray workers.
- --block-size
Possible choices: 8, 16, 32, 64, 128
Token block size for contiguous chunks of tokens. This is ignored on neuron devices and set to --max-model-len. On CUDA devices, only block sizes up to 32 are supported. On HPU devices, block size defaults to 128.
- --enable-prefix-caching, --no-enable-prefix-caching
Enables automatic prefix caching. Use --no-enable-prefix-caching to disable explicitly.
- --prefix-caching-hash-algo
Possible choices: builtin, sha256
Set the hash algorithm for prefix caching. Options are ‘builtin’ (Python’s built-in hash) or ‘sha256’ (collision resistant but with certain overheads).
Default: “builtin”
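For example, to enable automatic prefix caching with the collision-resistant hash (model name illustrative):
vllm serve facebook/opt-125m \
    --enable-prefix-caching \
    --prefix-caching-hash-algo sha256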
- --disable-sliding-window
Disables sliding window, capping the maximum model length to the sliding window size.
- --use-v2-block-manager
[DEPRECATED] block manager v1 has been removed and SelfAttnBlockSpaceManager (i.e. block manager v2) is now the default. Setting this flag to True or False has no effect on vLLM behavior.
- --num-lookahead-slots
Experimental scheduling config necessary for speculative decoding. This will be replaced by speculative config in the future; it is present to enable correctness tests until then.
Default: 0
- --seed
Random seed for operations.
- --swap-space
CPU swap space size (GiB) per GPU.
Default: 4
- --cpu-offload-gb
The space in GiB to offload to CPU, per GPU. Default is 0, which means no offloading. Intuitively, this argument can be seen as a virtual way to increase the GPU memory size. For example, if you have one 24 GB GPU and set this to 10, virtually you can think of it as a 34 GB GPU. Then you can load a 13B model with BF16 weight, which requires at least 26GB GPU memory. Note that this requires fast CPU-GPU interconnect, as part of the model is loaded from CPU memory to GPU memory on the fly in each model forward pass.
Default: 0
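As a sketch of the arithmetic above (a 13B BF16 model needs roughly 26 GB for its weights, so 10 GB of CPU offload lets it fit alongside a 24 GB GPU), the model name below is a hypothetical placeholder:
vllm serve my-org/example-13b-model \
    --dtype bfloat16 \
    --cpu-offload-gb 10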
- --gpu-memory-utilization
The fraction of GPU memory to be used for the model executor, which can range from 0 to 1. For example, a value of 0.5 would imply 50% GPU memory utilization. If unspecified, will use the default value of 0.9. This is a per-instance limit, and only applies to the current vLLM instance. It does not matter if you have another vLLM instance running on the same GPU. For example, if you have two vLLM instances running on the same GPU, you can set the GPU memory utilization to 0.5 for each instance.
Default: 0.9
- --num-gpu-blocks-override
If specified, ignore GPU profiling result and use this number of GPU blocks. Used for testing preemption.
- --max-num-batched-tokens
Maximum number of batched tokens per iteration.
- --max-num-partial-prefills
For chunked prefill, the max number of concurrent partial prefills.
Default: 1
- --max-long-partial-prefills
For chunked prefill, the maximum number of prompts longer than --long-prefill-token-threshold that will be prefilled concurrently. Setting this less than --max-num-partial-prefills will allow shorter prompts to jump the queue in front of longer prompts in some cases, improving latency.
Default: 1
- --long-prefill-token-threshold
For chunked prefill, a request is considered long if the prompt is longer than this number of tokens.
Default: 0
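Putting the chunked-prefill knobs together, the following illustrative command allows two concurrent partial prefills, treats prompts over 4096 tokens as long, and lets at most one long prompt prefill at a time (model name illustrative):
vllm serve facebook/opt-125m \
    --enable-chunked-prefill \
    --max-num-batched-tokens 8192 \
    --max-num-partial-prefills 2 \
    --long-prefill-token-threshold 4096 \
    --max-long-partial-prefills 1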
- --max-num-seqs
Maximum number of sequences per iteration.
- --max-logprobs
Max number of log probs to return when logprobs is specified in SamplingParams.
Default: 20
- --disable-log-stats
Disable logging statistics.
- --quantization, -q
Possible choices: aqlm, awq, deepspeedfp, tpu_int8, fp8, ptpc_fp8, fbgemm_fp8, modelopt, nvfp4, marlin, gguf, gptq_marlin_24, gptq_marlin, awq_marlin, gptq, compressed-tensors, bitsandbytes, qqq, hqq, experts_int8, neuron_quant, ipex, quark, moe_wna16, None
Method used to quantize the weights. If None, we first check the quantization_config attribute in the model config file. If that is None, we assume the model weights are not quantized and use dtype to determine the data type of the weights.
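For example, serving an AWQ-quantized checkpoint with the FP16 dtype recommended above might look like this; the model name is a hypothetical placeholder for an AWQ checkpoint:
vllm serve my-org/example-model-awq \
    --quantization awq \
    --dtype half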
- --rope-scaling
RoPE scaling configuration in JSON format. For example,
{"rope_type":"dynamic","factor":2.0}
- --rope-theta
RoPE theta. Use with rope_scaling. In some cases, changing the RoPE theta improves the performance of the scaled model.
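For example, to apply dynamic RoPE scaling with a factor of 2 while extending the context length (model name and values illustrative):
vllm serve facebook/opt-125m \
    --rope-scaling '{"rope_type":"dynamic","factor":2.0}' \
    --max-model-len 8192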
- --hf-overrides
Extra arguments for the HuggingFace config. This should be a JSON string that will be parsed into a dictionary.
- --enforce-eager
Always use eager-mode PyTorch. If False, will use eager mode and CUDA graph in hybrid for maximal performance and flexibility.
- --max-seq-len-to-capture
Maximum sequence length covered by CUDA graphs. When a sequence has context length larger than this, we fall back to eager mode. Additionally for encoder-decoder models, if the sequence length of the encoder input is larger than this, we fall back to the eager mode.
Default: 8192
- --disable-custom-all-reduce
See ParallelConfig.
- --tokenizer-pool-size
Size of tokenizer pool to use for asynchronous tokenization. If 0, will use synchronous tokenization.
Default: 0
- --tokenizer-pool-type
Type of tokenizer pool to use for asynchronous tokenization. Ignored if tokenizer_pool_size is 0.
Default: “ray”
- --tokenizer-pool-extra-config
Extra config for tokenizer pool. This should be a JSON string that will be parsed into a dictionary. Ignored if tokenizer_pool_size is 0.
- --limit-mm-per-prompt
For each multimodal plugin, limit how many input instances to allow for each prompt. Expects a comma-separated list of items, e.g.: image=16,video=2 allows a maximum of 16 images and 2 videos per prompt. Defaults to 1 for each modality.
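For example, to allow up to four images and one video per prompt on a multimodal model (the model name is a hypothetical placeholder):
vllm serve my-org/example-vlm \
    --limit-mm-per-prompt image=4,video=1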
- --mm-processor-kwargs
Overrides for the multimodal input mapping/processing, e.g., image processor. For example: {"num_crops": 4}.
- --disable-mm-preprocessor-cache
If true, then disables caching of the multi-modal preprocessor/mapper. (not recommended)
- --enable-lora
If True, enable handling of LoRA adapters.
- --enable-lora-bias
If True, enable bias for LoRA adapters.
- --max-loras
Max number of LoRAs in a single batch.
Default: 1
- --max-lora-rank
Max LoRA rank.
Default: 16
- --lora-extra-vocab-size
Maximum size of extra vocabulary that can be present in a LoRA adapter (added to the base model vocabulary).
Default: 256
- --lora-dtype
Possible choices: auto, float16, bfloat16
Data type for LoRA. If auto, will default to base model dtype.
Default: “auto”
- --long-lora-scaling-factors
Specify multiple scaling factors (which can be different from base model scaling factor - see eg. Long LoRA) to allow for multiple LoRA adapters trained with those scaling factors to be used at the same time. If not specified, only adapters trained with the base model scaling factor are allowed.
- --max-cpu-loras
Maximum number of LoRAs to store in CPU memory. Must be >= max_loras.
- --fully-sharded-loras
By default, only half of the LoRA computation is sharded with tensor parallelism. Enabling this will use the fully sharded layers. At high sequence length, max rank or tensor parallel size, this is likely faster.
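For example, to enable LoRA serving with room for four adapters of rank up to 32 per batch (model name illustrative; the adapters themselves are registered separately):
vllm serve facebook/opt-125m \
    --enable-lora \
    --max-loras 4 \
    --max-lora-rank 32 \
    --max-cpu-loras 8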
- --enable-prompt-adapter
If True, enable handling of PromptAdapters.
- --max-prompt-adapters
Max number of PromptAdapters in a batch.
Default: 1
- --max-prompt-adapter-token
Max number of PromptAdapter tokens.
Default: 0
- --device
Possible choices: auto, cuda, neuron, cpu, tpu, xpu, hpu
Device type for vLLM execution.
Default: “auto”
- --num-scheduler-steps
Maximum number of forward steps per scheduler call.
Default: 1
- --use-tqdm-on-load, --no-use-tqdm-on-load
Whether to enable/disable progress bar when loading model weights.
Default: True
- --multi-step-stream-outputs
If False, then multi-step will stream outputs at the end of all steps.
Default: True
- --scheduler-delay-factor
Apply a delay (of delay factor multiplied by previous prompt latency) before scheduling next prompt.
Default: 0.0
- --enable-chunked-prefill
If set, the prefill requests can be chunked based on the max_num_batched_tokens.
- --speculative-config
The configurations for speculative decoding. Should be a JSON string.
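As a rough sketch only, and assuming the speculative config accepts keys such as "model" and "num_speculative_tokens" (check the speculative decoding documentation for the exact schema; the draft model name is a hypothetical placeholder), it might be passed like this:
vllm serve facebook/opt-125m \
    --speculative-config '{"model": "my-org/example-draft-model", "num_speculative_tokens": 5}'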
- --model-loader-extra-config
Extra config for model loader. This will be passed to the model loader corresponding to the chosen load_format. This should be a JSON string that will be parsed into a dictionary.
- --ignore-patterns
The pattern(s) to ignore when loading the model. Defaults to original/**/* to avoid repeated loading of llama’s checkpoints.
Default: []
- --preemption-mode
If ‘recompute’, the engine performs preemption by recomputing; If ‘swap’, the engine performs preemption by block swapping.
- --served-model-name
The model name(s) used in the API. If multiple names are provided, the server will respond to any of the provided names. The model name in the model field of a response will be the first name in this list. If not specified, the model name will be the same as the --model argument. Note that these name(s) will also be used in the model_name tag content of Prometheus metrics; if multiple names are provided, the metrics tag will take the first one.
- --qlora-adapter-name-or-path
Name or path of the QLoRA adapter.
- --show-hidden-metrics-for-version
Enable deprecated Prometheus metrics that have been hidden since the specified version. For example, if a previously deprecated metric has been hidden since the v0.7.0 release, you can use --show-hidden-metrics-for-version=0.7 as a temporary escape hatch while you migrate to new metrics. The metric is likely to be removed completely in an upcoming release.
- --otlp-traces-endpoint
Target URL to which OpenTelemetry traces will be sent.
- --collect-detailed-traces
Valid choices are model, worker, all. It makes sense to set this only if --otlp-traces-endpoint is set. If set, it will collect detailed traces for the specified modules. This involves use of possibly costly and/or blocking operations and hence might have a performance impact.
- --disable-async-output-proc
Disable async output processing. This may result in lower performance.
- --scheduling-policy
Possible choices: fcfs, priority
The scheduling policy to use. “fcfs” (first come first served, i.e. requests are handled in order of arrival; default) or “priority” (requests are handled based on given priority (lower value means earlier handling) and time of arrival deciding any ties).
Default: “fcfs”
- --scheduler-cls
The scheduler class to use. “vllm.core.scheduler.Scheduler” is the default scheduler. Can be a class directly or the path to a class of form “mod.custom_class”.
Default: “vllm.core.scheduler.Scheduler”
- --override-neuron-config
Override or set neuron device configuration, e.g. {"cast_logits_dtype": "bfloat16"}.
- --override-pooler-config
Override or set the pooling method for pooling models, e.g. {"pooling_type": "mean", "normalize": false}.
- --compilation-config, -O
torch.compile configuration for the model. When it is a number (0, 1, 2, 3), it will be interpreted as the optimization level. NOTE: level 0 is the default level without any optimization. Levels 1 and 2 are for internal testing only. Level 3 is the recommended level for production. To specify the full compilation config, use a JSON string. Following the convention of traditional compilers, using -O without a space is also supported; -O3 is equivalent to -O 3.
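For example, the following two invocations are equivalent ways to request optimization level 3 (model name illustrative):
vllm serve facebook/opt-125m -O3
vllm serve facebook/opt-125m -O 3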
- --kv-transfer-config
The configurations for distributed KV cache transfer. Should be a JSON string.
- --worker-cls
The worker class to use for distributed execution.
Default: “auto”
- --worker-extension-cls
The worker extension class on top of the worker class; it is useful if you just want to add new functions to the worker class without changing the existing functions.
Default: “”
- --generation-config
The folder path to the generation config. Defaults to ‘auto’, in which case the generation config will be loaded from the model path. If set to ‘vllm’, no generation config is loaded and vLLM defaults will be used. If set to a folder path, the generation config will be loaded from the specified folder path. If max_new_tokens is specified in the generation config, it sets a server-wide limit on the number of output tokens for all requests.
Default: auto
- --override-generation-config
Overrides or sets generation config in JSON format, e.g. {"temperature": 0.5}. If used with --generation-config=auto, the override parameters will be merged with the default config from the model. If generation-config is None, only the override parameters are used.
- --enable-sleep-mode
Enable sleep mode for the engine. (only cuda platform is supported)
- --calculate-kv-scales
This enables dynamic calculation of k_scale and v_scale when kv-cache-dtype is fp8. If calculate-kv-scales is false, the scales will be loaded from the model checkpoint if available. Otherwise, the scales will default to 1.0.
- --additional-config
Additional config for specified platform in JSON format. Different platforms may support different configs. Make sure the configs are valid for the platform you are using. The input format is like {"config_key":"config_value"}.
- --enable-reasoning
Whether to enable reasoning_content for the model. If enabled, the model will be able to generate reasoning content.
- --reasoning-parser
Possible choices: deepseek_r1, granite
Select the reasoning parser depending on the model that you’re using. This is used to parse the reasoning content into OpenAI API format. Required for --enable-reasoning; see the example below.
- --disable-cascade-attn
Disable cascade attention for V1. While cascade attention does not change the mathematical correctness, disabling it could be useful for preventing potential numerical issues. Note that even if this is set to False, cascade attention will only be used when the heuristic determines that it is beneficial.
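For example, combining the --enable-reasoning and --reasoning-parser options above for a DeepSeek-R1-style model (model name illustrative):
vllm serve deepseek-ai/DeepSeek-R1 \
    --enable-reasoning \
    --reasoning-parser deepseek_r1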
Async Engine Arguments#
Additional arguments are available to the asynchronous engine, which is used for online serving:
usage: vllm serve [-h] [--disable-log-requests]
Named Arguments#
- --disable-log-requests
Disable logging requests.
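For example, to turn off per-request logging on a production server (model name illustrative):
vllm serve facebook/opt-125m --disable-log-requests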