vllm.multimodal
Modules:

Name | Description |
---|---|
audio | |
base | |
hasher | |
image | |
inputs | |
parse | |
processing | |
profiling | |
registry | |
utils | |
video | |
BatchedTensorInputs (module-attribute)

BatchedTensorInputs: TypeAlias = Mapping[str, NestedTensors]

A dictionary containing nested tensors which have been batched via MultiModalKwargs.batch.
MULTIMODAL_REGISTRY (module-attribute)

MULTIMODAL_REGISTRY = MultiModalRegistry()

The global MultiModalRegistry is used by model runners to dispatch data processing according to the target model.
ModalityData (module-attribute)

Either a single data item, or a list of data items.

The number of data items allowed per modality is restricted by --limit-mm-per-prompt.
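For example (a rough sketch; the model name is purely illustrative), the default cap of one item per modality can be raised through the corresponding `limit_mm_per_prompt` engine argument:

```python
from vllm import LLM

# Allow up to two images per prompt for this (illustrative) vision-language model.
llm = LLM(
    model="llava-hf/llava-1.5-7b-hf",
    limit_mm_per_prompt={"image": 2},
)
```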
MultiModalDataDict (module-attribute)

MultiModalDataDict: TypeAlias = Mapping[
    str, ModalityData[Any]
]

A dictionary containing an entry for each modality type to input.

The built-in modalities are defined by MultiModalDataBuiltins.
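A minimal sketch of building such a dictionary and attaching it to a prompt (the file name and the image placeholder token are illustrative; the exact placeholder depends on the model):

```python
from PIL import Image

# One image under the "image" modality; a list would supply several items
# (subject to the per-modality limit described above).
image = Image.open("example.jpg")  # hypothetical local file
mm_data = {"image": image}

# Passed to the engine as part of the prompt, e.g.:
# llm.generate({"prompt": "USER: <image>\nDescribe the image. ASSISTANT:",
#               "multi_modal_data": mm_data})
```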
MultiModalHashDict (module-attribute)

A dictionary containing hashes for items in each modality.
MultiModalPlaceholderDict (module-attribute)

MultiModalPlaceholderDict: TypeAlias = Mapping[
    str, Sequence[PlaceholderRange]
]

A dictionary containing placeholder ranges for each modality.
NestedTensors (module-attribute)

NestedTensors: TypeAlias = Union[
    list["NestedTensors"],
    list["torch.Tensor"],
    "torch.Tensor",
    tuple["torch.Tensor", ...],
]

Uses a list instead of a tensor if the dimensions of each element do not match.
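A short sketch of the two cases (tensor shapes are illustrative):

```python
import torch
from vllm.multimodal import NestedTensors

# Items with identical shapes can remain a single stacked tensor...
batched: NestedTensors = torch.zeros(2, 3, 336, 336)

# ...while items whose shapes differ are kept as a list, one tensor per item.
ragged: NestedTensors = [
    torch.zeros(3, 336, 336),
    torch.zeros(3, 448, 448),
]
```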
__all__ (module-attribute)

__all__ = [
    "BatchedTensorInputs",
    "ModalityData",
    "MultiModalDataBuiltins",
    "MultiModalDataDict",
    "MultiModalHashDict",
    "MultiModalHasher",
    "MultiModalKwargs",
    "MultiModalPlaceholderDict",
    "MultiModalPlaceholderMap",
    "NestedTensors",
    "MULTIMODAL_REGISTRY",
    "MultiModalRegistry",
]
MultiModalDataBuiltins

Bases: TypedDict

Type annotations for modality types predefined by vLLM.

Source code in vllm/multimodal/inputs.py
MultiModalHasher

Source code in vllm/multimodal/hasher.py

hash_kwargs (classmethod)

Source code in vllm/multimodal/hasher.py

item_to_bytes (classmethod)

iter_item_to_bytes (classmethod)

Source code in vllm/multimodal/hasher.py

serialize_item (classmethod)

serialize_item(obj: object) -> Union[bytes, memoryview]

Source code in vllm/multimodal/hasher.py
MultiModalKwargs

Bases: UserDict[str, NestedTensors]

A dictionary that represents the keyword arguments to torch.nn.Module.forward.

The metadata items enable us to obtain the keyword arguments corresponding to each data item in MultiModalDataItems, via get_item and get_items.

Source code in vllm/multimodal/inputs.py
__eq__

Source code in vllm/multimodal/inputs.py

__init__

__init__(
    data: Mapping[str, NestedTensors],
    *,
    items: Optional[Sequence[MultiModalKwargsItem]] = None,
) -> None

Source code in vllm/multimodal/inputs.py
_try_stack (staticmethod)

_try_stack(
    nested_tensors: NestedTensors, pin_memory: bool = False
) -> NestedTensors

Stack the inner dimensions that have the same shape in a nested list of tensors.

Thus, a dimension represented by a list means that the inner dimensions are different for each element along that dimension.

Source code in vllm/multimodal/inputs.py
_validate_modality

Source code in vllm/multimodal/inputs.py

as_kwargs (staticmethod)

as_kwargs(
    batched_inputs: BatchedTensorInputs, *, device: Device
) -> BatchedTensorInputs

Source code in vllm/multimodal/inputs.py
batch (staticmethod)

batch(
    inputs_list: list[MultiModalKwargs],
    pin_memory: bool = False,
) -> BatchedTensorInputs

Batch multiple inputs together into a dictionary.

The resulting dictionary has the same keys as the inputs. If the corresponding value from each input is a tensor and they all share the same shape, the output value is a single batched tensor; otherwise, the output value is a list containing the original value from each input.

Source code in vllm/multimodal/inputs.py
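A minimal sketch of that behavior (the "pixel_values" key and tensor shapes are illustrative):

```python
import torch
from vllm.multimodal import MultiModalKwargs

# Two per-request kwargs with the same key; the tensors share a shape,
# so batching can stack them into a single tensor.
a = MultiModalKwargs({"pixel_values": torch.zeros(1, 3, 336, 336)})
b = MultiModalKwargs({"pixel_values": torch.zeros(1, 3, 336, 336)})

batched = MultiModalKwargs.batch([a, b])
# batched["pixel_values"] is expected to be one stacked tensor here; had the
# shapes differed, it would instead be a list of the original tensors.
```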
from_hf_inputs (staticmethod)

from_hf_inputs(
    hf_inputs: BatchFeature,
    config_by_key: Mapping[str, MultiModalFieldConfig],
)

Source code in vllm/multimodal/inputs.py

from_items (staticmethod)

from_items(items: Sequence[MultiModalKwargsItem])

Construct a new MultiModalKwargs from multiple items.

Source code in vllm/multimodal/inputs.py
get_item

get_item(
    modality: str, item_index: int
) -> MultiModalKwargsItem

Get the keyword arguments corresponding to an item identified by its modality and index.

Source code in vllm/multimodal/inputs.py

get_item_count

Get the number of items belonging to a modality.

get_items

get_items(modality: str) -> Sequence[MultiModalKwargsItem]

Get the keyword arguments corresponding to each item belonging to a modality.

Source code in vllm/multimodal/inputs.py
MultiModalPlaceholderMap
Relates multi-modal embeddings to their corresponding placeholders.
Note: This is only used in V0.
Source code in vllm/multimodal/base.py
dest_len (instance-attribute)

dest_len: int = 0

The total number of embeddings in the destination tensor.

dest_ranges (instance-attribute)

The indices of the placeholder embeddings that will be replaced by the multi-modal embeddings.

src_ranges (instance-attribute)

The indices of the multi-modal embeddings that will replace the corresponding placeholder embeddings pointed to by dest_ranges.
IndexMap

__init__
append_items_from_seq_group

append_items_from_seq_group(
    positions: range,
    multi_modal_items: list[_T],
    multi_modal_placeholders: Sequence[PlaceholderRange],
) -> list[_T]

Adds the multi-modal items that intersect `positions` to this placeholder map and returns the intersecting items.

Source code in vllm/multimodal/base.py
extend

extend(other: MultiModalPlaceholderMap)

Adds the placeholders from another MultiModalPlaceholderMap to this instance, based on the source and destination tensors being concatenated.

Source code in vllm/multimodal/base.py
from_seq_group (classmethod)

from_seq_group(
    seq_group: SequenceGroupMetadata, positions: range
) -> tuple[
    MultiModalKwargs, dict[str, MultiModalPlaceholderMap]
]

Returns the multi-modal items that intersect with the portion of a prompt (seq_group) represented by positions, as well as a MultiModalPlaceholderMap that relates the multi-modal embedding vectors to their corresponding placeholders.

Examples:

    Prompt:    |AAAA BBBB What's in these images?|
    Positions: |.................................|

        images      = [A, B]
        src_ranges  = [(0, 4), (4, 8)]
        dest_ranges = [(0, 4), (5, 9)]

    Prompt:    |AAAA BBBB What's in these images?|
    Positions: |  .....                          |

        images      = [A, B]
        src_ranges  = [(2, 4), (4, 6)]
        dest_ranges = [(0, 2), (3, 5)]

    Prompt:    |AAAA BBBB What's in these images?|
    Positions: |     .........                   |

        images      = [B]
        src_ranges  = [(0, 4)]
        dest_ranges = [(0, 4)]

    Prompt:    |AAAA BBBB What's in these images?|
    Positions: |          .......................|

        images      = []
        src_ranges  = []
        dest_ranges = []

Source code in vllm/multimodal/base.py
index_map

index_map() -> IndexMap

Finalizes the placeholder map into lists of indices that can be used to index the source and destination tensors.

Source code in vllm/multimodal/base.py
MultiModalRegistry
A registry that dispatches data processing according to the model.
Source code in vllm/multimodal/registry.py
_processor_factories (instance-attribute)

_processor_factories = ClassRegistry[
    Module, _ProcessorFactories
]()

__init__

_get_model_cls

_get_model_cls(model_config: ModelConfig)

create_input_mapper

create_input_mapper(model_config: ModelConfig)

Source code in vllm/multimodal/registry.py
create_processor

create_processor(
    model_config: ModelConfig,
    *,
    tokenizer: Optional[AnyTokenizer] = None,
    disable_cache: Optional[bool] = None,
) -> BaseMultiModalProcessor[BaseProcessingInfo]

Create a multi-modal processor for a specific model and tokenizer.

Source code in vllm/multimodal/registry.py
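A rough sketch of obtaining a processor from the global registry; building the ModelConfig via EngineArgs.create_engine_config() is an assumption about surrounding vLLM APIs (the engine normally does this itself), and the model name is purely illustrative:

```python
from vllm.engine.arg_utils import EngineArgs
from vllm.multimodal import MULTIMODAL_REGISTRY

# Assumption: create_engine_config() returns a config whose .model_config
# can be handed to the registry; model name is illustrative only.
engine_args = EngineArgs(model="llava-hf/llava-1.5-7b-hf")
model_config = engine_args.create_engine_config().model_config

# Dispatches to the processor registered for this model's architecture.
processor = MULTIMODAL_REGISTRY.create_processor(model_config)

# Per-prompt item limits for each modality, as configured for this model.
mm_limits = MULTIMODAL_REGISTRY.get_mm_limits_per_prompt(model_config)
```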
get_decoder_dummy_data

get_decoder_dummy_data(
    model_config: ModelConfig,
    seq_len: int,
    mm_counts: Optional[Mapping[str, int]] = None,
) -> DummyDecoderData

Create dummy data for profiling the memory usage of a model. The model is identified by model_config.

Source code in vllm/multimodal/registry.py

get_encoder_dummy_data

get_encoder_dummy_data(
    model_config: ModelConfig,
    seq_len: int,
    mm_counts: Optional[Mapping[str, int]] = None,
) -> DummyEncoderData

Create dummy data for profiling the memory usage of a model. The model is identified by model_config.

Source code in vllm/multimodal/registry.py
get_max_multimodal_tokens

get_max_multimodal_tokens(model_config: ModelConfig) -> int

Get the maximum number of multi-modal tokens for profiling the memory usage of a model.

Source code in vllm/multimodal/registry.py

get_max_tokens_by_modality

get_max_tokens_by_modality(
    model_config: ModelConfig,
) -> Mapping[str, int]

Get the maximum number of tokens from each modality for profiling the memory usage of a model.

Source code in vllm/multimodal/registry.py
get_max_tokens_per_item_by_modality

get_max_tokens_per_item_by_modality(
    model_config: ModelConfig,
) -> Mapping[str, int]

Get the maximum number of tokens per data item from each modality, based on the underlying model configuration.

Source code in vllm/multimodal/registry.py
get_max_tokens_per_item_by_nonzero_modality

get_max_tokens_per_item_by_nonzero_modality(
    model_config: ModelConfig,
) -> Mapping[str, int]

Get the maximum number of tokens per data item from each modality, based on the underlying model configuration, excluding modalities that the user explicitly disabled via limit_mm_per_prompt.

Note: This is currently used directly only in V1 for profiling the memory usage of a model.

Source code in vllm/multimodal/registry.py
get_mm_limits_per_prompt

get_mm_limits_per_prompt(
    model_config: ModelConfig,
) -> Mapping[str, int]

Get the maximum number of multi-modal input instances for each modality that are allowed per prompt for a model class.

Source code in vllm/multimodal/registry.py

has_processor

has_processor(model_config: ModelConfig) -> bool

Source code in vllm/multimodal/registry.py

init_mm_limits_per_prompt

init_mm_limits_per_prompt(
    model_config: ModelConfig,
) -> None

Source code in vllm/multimodal/registry.py
register_processor

register_processor(
    processor: MultiModalProcessorFactory[_I],
    *,
    info: ProcessingInfoFactory[_I],
    dummy_inputs: DummyInputsBuilderFactory[_I],
)

Register a multi-modal processor to a model class. The processor is constructed lazily, hence a factory method should be passed.

When the model receives multi-modal data, the provided function is invoked to transform the data into a dictionary of model inputs.