LLM Inputs

vllm.inputs.PromptType

alias of Union[str, TextPrompt, TokensPrompt, ExplicitEncoderDecoderPrompt]
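
Any of these forms can be passed wherever a PromptType is accepted. As a minimal sketch (the model name below is an assumption for illustration, not part of the API), LLM.generate treats a plain string, a TextPrompt, and a TokensPrompt interchangeably:

```python
from vllm import LLM
from vllm.inputs import TextPrompt, TokensPrompt

llm = LLM(model="facebook/opt-125m")  # model name assumed for illustration

# Each of the following is a valid PromptType.
as_str = "The capital of France is"
as_text = TextPrompt(prompt="The capital of France is")
as_tokens = TokensPrompt(
    prompt_token_ids=llm.get_tokenizer().encode("The capital of France is")
)

for prompt in (as_str, as_text, as_tokens):
    output = llm.generate(prompt)[0]
    print(output.outputs[0].text)
```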

class vllm.inputs.TextPrompt

Bases: TypedDict

Schema for a text prompt.

prompt: str

The input text to be tokenized before passing to the model.

multi_modal_data: NotRequired[MultiModalDataDict]

Optional multi-modal data to pass to the model, if the model supports it.

mm_processor_kwargs: NotRequired[Dict[str, Any]]

Optional multi-modal processor kwargs to be forwarded to the multi-modal input mapper and processor. Note that if multiple modalities have registered mappers for the model in use, vLLM attempts to pass the mm_processor_kwargs to each of them.
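
As a sketch of how these fields fit together: a TextPrompt is an ordinary TypedDict, so it is constructed with keyword arguments, and the optional keys are simply omitted for text-only models. The model name, "image" key, and chat format below are assumptions for illustration:

```python
from vllm import LLM
from vllm.inputs import TextPrompt

llm = LLM(model="facebook/opt-125m")  # model name assumed for illustration

# Text-only usage: only the required `prompt` key is set.
prompt = TextPrompt(prompt="The future of AI is")
print(llm.generate(prompt)[0].outputs[0].text)

# For a model that supports images, the optional keys could be added
# (the "image" key, chat format, and kwarg name are illustrative assumptions):
# prompt = TextPrompt(
#     prompt="USER: <image>\nDescribe this picture.\nASSISTANT:",
#     multi_modal_data={"image": some_pil_image},
#     mm_processor_kwargs={"num_crops": 4},
# )
```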

class vllm.inputs.TokensPrompt

Bases: TypedDict

Schema for a tokenized prompt.

prompt_token_ids: List[int]

A list of token IDs to pass to the model.

token_type_ids: NotRequired[List[int]]

A list of token type IDs to pass to the cross-encoder model.

multi_modal_data: NotRequired[MultiModalDataDict]

Optional multi-modal data to pass to the model, if the model supports it.

mm_processor_kwargs: NotRequired[Dict[str, Any]]

Optional multi-modal processor kwargs to be forwarded to the multi-modal input mapper and processor. Note that if multiple modalities have registered mappers for the model in use, vLLM attempts to pass the mm_processor_kwargs to each of them.
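
As a minimal sketch (the model name is an assumption for illustration), a TokensPrompt lets you tokenize outside vLLM and pass the resulting IDs directly:

```python
from transformers import AutoTokenizer

from vllm import LLM
from vllm.inputs import TokensPrompt

model_id = "facebook/opt-125m"  # model name assumed for illustration
llm = LLM(model=model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Tokenize outside vLLM, then pass the token IDs directly.
token_ids = tokenizer.encode("The capital of France is")
outputs = llm.generate(TokensPrompt(prompt_token_ids=token_ids))
print(outputs[0].outputs[0].text)
```

Bypassing vLLM's internal tokenization this way can be useful when the token IDs come from a custom preprocessing pipeline or must match an external template exactly.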