Multi-Modality#

vLLM provides experimental support for multi-modal models through the vllm.multimodal package.

Multi-modal inputs can be passed alongside text and token prompts to supported models via the multi_modal_data field in vllm.inputs.PromptType.
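As a minimal sketch, a prompt carrying multi-modal data is a dict with a prompt field plus multi_modal_data. The assumptions here are that "image" is the modality key for image inputs and that the value would normally be an actual image object (e.g. loaded with PIL); the model name and chat template in the commented-out portion are illustrative only.

```python
# Sketch of the multi-modal prompt structure.
# Assumption: "image" is the modality key; in real use the value would be
# a loaded image (e.g. a PIL.Image), not this stand-in object.
image = object()  # stand-in for an image loaded elsewhere

prompt = {
    "prompt": "USER: <image>\nWhat is in this image? ASSISTANT:",
    "multi_modal_data": {"image": image},
}

# In real use (requires model weights and a supported vision-language model;
# the model name below is illustrative):
# from vllm import LLM
# llm = LLM(model="llava-hf/llava-1.5-7b-hf")
# outputs = llm.generate(prompt)
```

The text prompt and the multi-modal data travel together in one object, so the model runner can associate the image with the placeholder token in the prompt.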

Looking to add your own multi-modal model? Please follow the instructions listed here.

Module Contents#

vllm.multimodal.MULTIMODAL_REGISTRY = <vllm.multimodal.registry.MultiModalRegistry object>[source]#

The global MultiModalRegistry is used by model runners to dispatch data processing according to the target model.
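To illustrate the dispatch idea behind such a registry, here is a minimal sketch: processors are registered per modality, and incoming data is routed to the processor matching its modality. All names below are hypothetical and do not reflect vLLM's actual MultiModalRegistry API.

```python
# Hypothetical sketch of modality-based dispatch; not vLLM's actual API.
from typing import Callable, Dict


class RegistrySketch:
    def __init__(self) -> None:
        # Maps a modality name (e.g. "image") to its processing function.
        self._processors: Dict[str, Callable[[object], dict]] = {}

    def register(self, modality: str, processor: Callable[[object], dict]) -> None:
        self._processors[modality] = processor

    def process(self, modality: str, data: object) -> dict:
        # Dispatch the raw data to the processor registered for its modality.
        return self._processors[modality](data)


registry = RegistrySketch()
registry.register("image", lambda img: {"pixel_values": img})
out = registry.process("image", [[0, 1], [1, 0]])
```

In vLLM the real registry additionally dispatches on the target model, since different models preprocess the same modality differently.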

Submodules#