Data Parsing#
Module Contents#
- class vllm.multimodal.parse.ModalityDataItems(data: _T, modality: str)[source]#
Represents data items for a modality in MultiModalDataItems.
- class vllm.multimodal.parse.ProcessorBatchItems(data: _T, modality: str)[source]#
Base class for data items that are arranged in a list.
- class vllm.multimodal.parse.EmbeddingItems(data: _T, modality: str)[source]#
Base class for data items that are expressed as a batched embedding tensor, or a list of embedding tensors (one per item).
- get(index: int) → torch.Tensor[source]#
Get a data item by its index.
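A minimal sketch of why a single indexing operation covers both accepted layouts. This is an illustration, not the vLLM implementation; plain nested lists stand in for `torch.Tensor`.

```python
# Sketch: EmbeddingItems accepts either a batched "tensor" whose first
# dimension indexes items, or a list of per-item "tensors". Both support
# integer indexing along the first dimension, so one lookup works for both.

def get_item(data, index):
    """Return the embedding for one item, regardless of layout."""
    return data[index]

batched = [[0.1, 0.2], [0.3, 0.4]]        # stand-in for shape (num_items, hidden)
per_item = [[0.1, 0.2], [0.3, 0.4, 0.5]]  # list of variable-length items

assert get_item(batched, 1) == [0.3, 0.4]
assert get_item(per_item, 1) == [0.3, 0.4, 0.5]
```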
- class vllm.multimodal.parse.DictEmbeddingItems(data: Mapping[str, torch.Tensor], modality: str, required_fields: set[str], fields_factory: Callable[[Mapping[str, torch.Tensor]], Mapping[str, MultiModalFieldConfig]])[source]#
Base class for data items that are expressed as a dictionary of tensors.
Usually, the dictionary keys correspond to the outputs of the HF processor.
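The `required_fields` argument suggests an up-front validation step. The sketch below is an assumption about that behavior, not vLLM's actual code; the field names are hypothetical examples of HF-processor output keys.

```python
# Sketch (assumption): check that every required field is present in the
# tensor mapping before building the data items, failing fast otherwise.

def validate_fields(data, required_fields):
    missing = required_fields - data.keys()
    if missing:
        raise ValueError(f"Missing required fields: {sorted(missing)}")
    # Keep only the fields the model actually consumes.
    return {k: data[k] for k in required_fields}

# Hypothetical HF-processor outputs (lists stand in for tensors).
data = {"image_embeds": [[1.0, 2.0]], "image_grid_thw": [[1, 2, 2]]}
out = validate_fields(data, {"image_embeds", "image_grid_thw"})
assert set(out) == {"image_embeds", "image_grid_thw"}
```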
- class vllm.multimodal.parse.AudioProcessorItems(data: Sequence[list[float] | numpy.ndarray | torch.Tensor])[source]#
- class vllm.multimodal.parse.AudioEmbeddingItems(data: torch.Tensor | list[torch.Tensor])[source]#
- class vllm.multimodal.parse.ImageProcessorItems(data: Sequence[Image | numpy.ndarray | torch.Tensor])[source]#
- class vllm.multimodal.parse.ImageEmbeddingItems(data: torch.Tensor | list[torch.Tensor])[source]#
- class vllm.multimodal.parse.VideoProcessorItems(data: Sequence[list[PIL.Image.Image] | numpy.ndarray | torch.Tensor | list[numpy.ndarray] | list[torch.Tensor]])[source]#
- class vllm.multimodal.parse.VideoEmbeddingItems(data: torch.Tensor | list[torch.Tensor])[source]#
- class vllm.multimodal.parse.MultiModalDataItems(dict=None, /, **kwargs)[source]#
As MultiModalDataDict, but normalized such that each entry corresponds to a list.
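The normalization can be pictured as follows. This is a simplified sketch of the idea, not vLLM's implementation, which also handles arrays, tensors, and per-modality item types.

```python
# Sketch (assumption): wrap a single item per modality into a one-element
# list, so downstream code can always iterate over each modality uniformly.

def normalize(mm_data):
    normalized = {}
    for modality, value in mm_data.items():
        # A bare item (e.g. one image or one audio clip) becomes [item];
        # an existing list is kept as-is.
        normalized[modality] = value if isinstance(value, list) else [value]
    return normalized

assert normalize({"image": "img0"}) == {"image": ["img0"]}
assert normalize({"image": ["img0", "img1"]}) == {"image": ["img0", "img1"]}
```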
- class vllm.multimodal.parse.MultiModalDataParser(*, target_sr: float | None = None)[source]#
Parses MultiModalDataDict into MultiModalDataItems.
- Parameters:
target_sr (float, optional) – Enables automatic resampling of audio items to the model’s expected sampling rate.
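The effect of `target_sr` can be sketched with a toy resampler. This is an assumption for illustration only; linear interpolation stands in for whichever resampling routine vLLM actually applies, and plain lists stand in for audio arrays.

```python
# Sketch (assumption): audio arriving at a sampling rate other than
# target_sr is resampled before parsing. Linear interpolation between
# neighboring samples approximates the signal at the new rate.

def resample(samples, orig_sr, target_sr):
    if orig_sr == target_sr:
        return list(samples)
    ratio = orig_sr / target_sr
    n_out = int(len(samples) * target_sr / orig_sr)
    out = []
    for i in range(n_out):
        pos = i * ratio           # fractional position in the input
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Halving the rate halves the number of samples.
halved = resample([0.0, 1.0, 0.0, -1.0], orig_sr=4, target_sr=2)
assert len(halved) == 2
```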