Data Processing
Module Contents
- class vllm.multimodal.processing.PromptReplacement(modality: str, target: str | list[int], replacement: Callable[[int], str | list[int]] | str | list[int])[source]
Defines how to replace portions of an input prompt with placeholder tokens.
- replacement: Callable[[int], str | list[int]] | str | list[int][source]
Given the index of the processed item within modality, output the replacement token sequence (or text).
For convenience, you can directly pass in the replacement token sequence (or text) instead of a function if it does not depend on the input.
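As a toy illustration (not vLLM's actual implementation), the callable-or-static convenience described above can be sketched as a small resolver: a static value is returned as-is, while a callable is invoked with the item index.

```python
def resolve_replacement(replacement, item_idx: int):
    """Return replacement(item_idx) if it is a callable, else replacement itself."""
    if callable(replacement):
        return replacement(item_idx)
    return replacement

# A static replacement is the same for every item...
assert resolve_replacement("<image>", 0) == "<image>"
# ...while a callable can vary with the item index
# (token ID 32000 here is purely illustrative).
assert resolve_replacement(lambda i: [32000] * (i + 1), 2) == [32000, 32000, 32000]
```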
- vllm.multimodal.processing.full_groupby_modality(values: Iterable[_M]) → ItemsView[str, list[_M]] [source]
Convenience function to apply full_groupby() based on modality.
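A minimal sketch of the "full" grouping semantics this builds on (assumed behavior: unlike `itertools.groupby`, all items with the same key are grouped together regardless of their order in the input; `full_groupby_modality` would then simply key on the modality):

```python
from collections import defaultdict

def full_groupby(values, key):
    """Group ALL items by key, not just consecutive runs as itertools.groupby does."""
    groups = defaultdict(list)
    for value in values:
        groups[key(value)].append(value)
    return groups.items()

# Grouping (modality, payload) pairs by modality, even when interleaved:
items = [("image", "a"), ("audio", "b"), ("image", "c")]
grouped = dict(full_groupby(items, key=lambda item: item[0]))
assert grouped == {"image": [("image", "a"), ("image", "c")],
                   "audio": [("audio", "b")]}
```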
- class vllm.multimodal.processing.BoundPromptReplacement(tokenizer: transformers.PreTrainedTokenizer | transformers.PreTrainedTokenizerFast | MistralTokenizer, modality: str, _target: str | list[int], _replacement: Callable[[int], str | list[int]] | str | list[int])[source]
A PromptReplacement bound to a tokenizer to automatically convert target and the result of get_replacement() between token sequence and text representations.
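The idea of "binding" can be sketched with a toy tokenizer (all names and token IDs below are illustrative, not vLLM's): once a tokenizer is attached, a target supplied as text can be viewed as token IDs and vice versa.

```python
class ToyTokenizer:
    """Hypothetical stand-in for a real tokenizer, with a tiny fixed vocab."""
    vocab = {"<image>": 32000, "hello": 1, "world": 2}

    def encode(self, text: str) -> list[int]:
        return [self.vocab[tok] for tok in text.split()]

    def decode(self, token_ids: list[int]) -> str:
        inv = {v: k for k, v in self.vocab.items()}
        return " ".join(inv[t] for t in token_ids)

class BoundTarget:
    """Sketch: expose both the text and token views of a target."""
    def __init__(self, tokenizer, target):
        self.tokenizer = tokenizer
        self.target = target

    @property
    def token_ids(self) -> list[int]:
        if isinstance(self.target, str):
            return self.tokenizer.encode(self.target)
        return self.target

    @property
    def text(self) -> str:
        if isinstance(self.target, str):
            return self.target
        return self.tokenizer.decode(self.target)

bound = BoundTarget(ToyTokenizer(), "<image>")
assert bound.token_ids == [32000]
assert BoundTarget(ToyTokenizer(), [1, 2]).text == "hello world"
```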
- vllm.multimodal.processing.iter_token_matches(token_ids: list[int], match_ids: list[int]) → Iterable[_TokenMatch] [source]
Yield each occurrence of match_ids in token_ids.
Note that empty matches are ignored.
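A simple sketch of such subsequence matching (assumptions: non-overlapping, left-to-right matches, and an empty pattern yields nothing, per the note above):

```python
from typing import Iterable, NamedTuple

class TokenMatch(NamedTuple):
    start_idx: int
    end_idx: int

def iter_token_matches(token_ids: list[int], match_ids: list[int]) -> Iterable[TokenMatch]:
    """Yield each non-overlapping occurrence of match_ids in token_ids."""
    if not match_ids:  # empty matches are ignored
        return
    n = len(match_ids)
    i = 0
    while i <= len(token_ids) - n:
        if token_ids[i:i + n] == match_ids:
            yield TokenMatch(i, i + n)
            i += n  # skip past the match to avoid overlapping hits
        else:
            i += 1

matches = list(iter_token_matches([1, 9, 9, 2, 9, 9], [9, 9]))
assert matches == [(1, 3), (4, 6)]
assert list(iter_token_matches([1, 2], [])) == []
```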
- class vllm.multimodal.processing.PlaceholderInfo(modality: str, item_idx: int, start_idx: int, replacement: list[int])[source]
- vllm.multimodal.processing.find_token_matches(prompt: list[int], prompt_repls: Sequence[BoundPromptReplacement]) → list[vllm.multimodal.processing._PromptReplacementTokenMatch] [source]
Return each target of prompt_repls found in prompt.
- vllm.multimodal.processing.find_text_matches(prompt: str, prompt_repls: Sequence[BoundPromptReplacement]) → list[vllm.multimodal.processing._PromptReplacementTextMatch] [source]
Return each target of prompt_repls found in prompt.
- vllm.multimodal.processing.replace_token_matches(prompt: list[int], mm_matches: Mapping[str, Sequence[_PromptReplacementTokenMatch]], mm_item_counts: Mapping[str, int]) → list[int] [source]
Apply the replacements in mm_matches to prompt.
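The core splicing step can be sketched as follows (simplified relative to the real API: one replacement sequence for all matches, and the (start, end) spans are assumed sorted and non-overlapping):

```python
def splice_matches(prompt: list[int], matches: list[tuple[int, int]],
                   replacement: list[int]) -> list[int]:
    """Splice `replacement` into `prompt` over each (start, end) match span."""
    out: list[int] = []
    prev_end = 0
    for start, end in matches:
        out += prompt[prev_end:start]  # keep the text before the match
        out += replacement             # substitute the matched span
        prev_end = end
    out += prompt[prev_end:]           # keep the tail after the last match
    return out

# Replace each occurrence of the span [9, 9] with three placeholder tokens
# (token ID 32000 is illustrative):
new_prompt = splice_matches([1, 9, 9, 2, 9, 9], [(1, 3), (4, 6)], [32000] * 3)
assert new_prompt == [1, 32000, 32000, 32000, 2, 32000, 32000, 32000]
```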
- vllm.multimodal.processing.replace_text_matches(prompt: str, mm_matches: Mapping[str, Sequence[_PromptReplacementTextMatch]], mm_item_counts: Mapping[str, int]) → str [source]
Apply the replacements in mm_matches to prompt.
- class vllm.multimodal.processing.BaseProcessingInfo(ctx: InputProcessingContext)[source]
Base class to provide the information necessary for data processing.
- get_hf_processor(**kwargs: object) → transformers.ProcessorMixin [source]
Subclasses can override this method to handle specific kwargs from model config or user inputs.
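One way such an override might look (a hypothetical sketch; the subclass name, the `num_crops` kwarg, and the `_load_processor` helper are all illustrative, not part of vLLM's API):

```python
class MyProcessingInfo:
    """Hypothetical subclass forwarding only the kwargs its processor understands."""

    def get_hf_processor(self, **kwargs: object):
        # Keep only model-specific options (e.g. an assumed `num_crops`
        # kwarg) and drop anything the processor would reject.
        allowed = {"num_crops"}
        filtered = {k: v for k, v in kwargs.items() if k in allowed}
        return self._load_processor(**filtered)

    def _load_processor(self, **kwargs):
        # Stub standing in for the actual HF processor construction.
        return ("processor", kwargs)

info = MyProcessingInfo()
assert info.get_hf_processor(num_crops=4, unrelated=True) == ("processor", {"num_crops": 4})
```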
- class vllm.multimodal.processing.BaseMultiModalProcessor(info: _I, dummy_inputs: BaseDummyInputsBuilder[_I], *, cache: ProcessingCache | None = None, enable_sanity_checks: bool = True)[source]
Abstract base class to process multi-modal inputs to be used in vLLM.
Not to be confused with transformers.ProcessorMixin.
- apply(prompt: str | list[int], mm_data: Mapping[str, Any | list[Any]], hf_processor_mm_kwargs: Mapping[str, object]) → MultiModalInputsV2 [source]
Process multi-modal inputs to be used in vLLM.
The main steps are:
1. Apply the HF processor on the prompt text and multi-modal data together, outputting token IDs and processed tensors.
2. Find and replace sequences in the token IDs with placeholder tokens. The number of placeholder tokens equals the feature size of the multi-modal data outputted by the multi-modal encoder.
3. Extract information about the placeholder tokens from the processed token IDs.
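The last two steps can be sketched end to end as a toy pipeline (pure Python, no vLLM or HF dependencies; the token IDs, the per-image feature size, and the dict-based placeholder records are all illustrative):

```python
IMAGE_TOKEN = 9      # assumed token the HF processor emits per image
PLACEHOLDER = 32000  # assumed placeholder token ID
FEATURE_SIZE = 3     # assumed feature size per image from the encoder

def apply(prompt_tokens: list[int]) -> tuple[list[int], list[dict]]:
    """Replace each image token with placeholders and record where they land."""
    new_tokens: list[int] = []
    placeholders: list[dict] = []
    item_idx = 0
    for tok in prompt_tokens:
        if tok == IMAGE_TOKEN:
            # Step 3: record placeholder info for this item as we go.
            placeholders.append({"modality": "image",
                                 "item_idx": item_idx,
                                 "start_idx": len(new_tokens),
                                 "replacement": [PLACEHOLDER] * FEATURE_SIZE})
            # Step 2: one image token becomes FEATURE_SIZE placeholders.
            new_tokens += [PLACEHOLDER] * FEATURE_SIZE
            item_idx += 1
        else:
            new_tokens.append(tok)
    return new_tokens, placeholders

# Step 1 (HF tokenization of prompt + image) is assumed to have produced [1, 9, 2]:
tokens, info = apply([1, 9, 2])
assert tokens == [1, 32000, 32000, 32000, 2]
assert info[0]["start_idx"] == 1 and info[0]["item_idx"] == 0
```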