vllm.multimodal.inputs
AudioItem
module-attribute
¶
Represents a single audio item, which can be passed to a HuggingFace AudioProcessor.
Alternatively, a tuple (audio, sampling_rate), where the sampling rate is different from that expected by the model; these are resampled to the model's sampling rate before being processed by HF.
Alternatively, a 3-D tensor or batch of 2-D tensors, which are treated as audio embeddings; these are directly passed to the model without HF processing.
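For illustration, a minimal sketch of the three accepted forms (array lengths and the embedding shape are assumptions):

```python
import numpy as np
import torch

# Raw waveform, handed to the HF AudioProcessor as-is.
waveform = np.zeros(16_000, dtype=np.float32)

# (audio, sampling_rate) tuple; resampled to the model's rate before HF processing.
resampled = (np.zeros(48_000, dtype=np.float32), 48_000)

# Precomputed audio embeddings (3-D tensor); bypasses HF processing entirely.
audio_embeds = torch.zeros(1, 128, 4096)
```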
BatchedTensorInputs
module-attribute
¶
BatchedTensorInputs: TypeAlias = Mapping[str, NestedTensors]
A dictionary containing nested tensors which have been batched via MultiModalKwargs.batch.
HfAudioItem
module-attribute
¶
Represents a single audio item, which can be passed to a HuggingFace AudioProcessor.
HfImageItem
module-attribute
¶
A transformers.image_utils.ImageInput representing a single image item, which can be passed to a HuggingFace ImageProcessor.
HfVideoItem
module-attribute
¶
HfVideoItem: TypeAlias = Union[
list["Image"],
ndarray,
"torch.Tensor",
list[ndarray],
list["torch.Tensor"],
]
A transformers.image_utils.VideoInput representing a single video item, which can be passed to a HuggingFace VideoProcessor.
ImageItem
module-attribute
¶
ImageItem: TypeAlias = Union[HfImageItem, "torch.Tensor"]
A transformers.image_utils.ImageInput representing a single image item, which can be passed to a HuggingFace ImageProcessor.
Alternatively, a 3-D tensor or batch of 2-D tensors, which are treated as image embeddings; these are directly passed to the model without HF processing.
ModalityData
module-attribute
¶
Either a single data item, or a list of data items.
The number of data items allowed per modality is restricted by --limit-mm-per-prompt.
MultiModalDataDict
module-attribute
¶
MultiModalDataDict: TypeAlias = Mapping[
str, ModalityData[Any]
]
A dictionary containing an entry for each modality type to input.
The built-in modalities are defined by MultiModalDataBuiltins.
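As a usage sketch (the image path is a placeholder and the audio array is dummy data), such a dictionary maps each modality name to a single item or to a list of items, and is typically supplied as the multi_modal_data field of a prompt:

```python
import numpy as np
from PIL import Image

from vllm.multimodal.inputs import MultiModalDataDict

mm_data: MultiModalDataDict = {
    "image": Image.open("example.jpg"),             # a single item for this modality ...
    "audio": [np.zeros(16_000, dtype=np.float32)],  # ... or a list of items
}
```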
MultiModalPlaceholderDict
module-attribute
¶
MultiModalPlaceholderDict: TypeAlias = Mapping[
str, Sequence[PlaceholderRange]
]
A dictionary containing placeholder ranges for each modality.
NestedTensors
module-attribute
¶
NestedTensors: TypeAlias = Union[
list["NestedTensors"],
list["torch.Tensor"],
"torch.Tensor",
tuple["torch.Tensor", ...],
]
Uses a list instead of a tensor if the dimensions of each element do not match.
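A minimal illustration of that rule (shapes chosen arbitrarily):

```python
import torch

from vllm.multimodal.inputs import NestedTensors

uniform: NestedTensors = torch.zeros(2, 3)                # matching dimensions: a single tensor
ragged: NestedTensors = [torch.zeros(3), torch.zeros(5)]  # mismatched dimensions: a list of tensors
```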
VideoItem
module-attribute
¶
VideoItem: TypeAlias = Union[HfVideoItem, "torch.Tensor"]
A transformers.image_utils.VideoInput representing a single video item, which can be passed to a HuggingFace VideoProcessor.
Alternatively, a 3-D tensor or batch of 2-D tensors, which are treated as video embeddings; these are directly passed to the model without HF processing.
BaseMultiModalField
dataclass
¶
Bases: ABC
Defines how to interpret tensor data belonging to a keyword argument in MultiModalKwargs for multiple multi-modal items, and vice versa.
Source code in vllm/multimodal/inputs.py
_field_factory
¶
Source code in vllm/multimodal/inputs.py
_reduce_data
abstractmethod
¶
_reduce_data(batch: list[NestedTensors]) -> NestedTensors
build_elems
abstractmethod
¶
build_elems(
modality: str, key: str, data: NestedTensors
) -> Sequence[MultiModalFieldElem]
Construct MultiModalFieldElem instances to represent the provided data.
This is the inverse of reduce_data.
Source code in vllm/multimodal/inputs.py
reduce_data
¶
reduce_data(
elems: list[MultiModalFieldElem],
) -> NestedTensors
Merge the data from multiple instances of MultiModalFieldElem.
This is the inverse of build_elems.
Source code in vllm/multimodal/inputs.py
MultiModalBatchedField
dataclass
¶
Bases: BaseMultiModalField
Source code in vllm/multimodal/inputs.py
_reduce_data
¶
_reduce_data(batch: list[NestedTensors]) -> NestedTensors
Source code in vllm/multimodal/inputs.py
build_elems
¶
build_elems(
modality: str, key: str, data: NestedTensors
) -> Sequence[MultiModalFieldElem]
MultiModalDataBuiltins
¶
Bases: TypedDict
Type annotations for modality types predefined by vLLM.
Source code in vllm/multimodal/inputs.py
MultiModalEncDecInputs
¶
Bases: MultiModalInputs
Represents the outputs of EncDecMultiModalProcessor, ready to be passed to vLLM internals.
Source code in vllm/multimodal/inputs.py
encoder_prompt_token_ids
instance-attribute
¶
The processed token IDs of the encoder prompt.
encoder_token_type_ids
instance-attribute
¶
encoder_token_type_ids: NotRequired[list[int]]
The token type IDs of the encoder prompt.
MultiModalFieldConfig
¶
Source code in vllm/multimodal/inputs.py
__init__
¶
__init__(field: BaseMultiModalField, modality: str) -> None
batched
staticmethod
¶
batched(modality: str)
Defines a field where an element in the batch is obtained by indexing into the first dimension of the underlying data.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| modality | str | The modality of the multi-modal item that uses this keyword argument. | required |
Example:
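Input:
Data: [[AAAA],[BBBB],[CCCC]]
Output:
Element 1: [AAAA]
Element 2: [BBBB]
Element 3: [CCCC]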
Source code in vllm/multimodal/inputs.py
build_elems
¶
build_elems(
key: str, batch: NestedTensors
) -> Sequence[MultiModalFieldElem]
flat
staticmethod
¶
flat(
    modality: str,
    slices: Union[Sequence[slice], Sequence[Sequence[slice]]],
    dim: int = 0,
)
Defines a field where an element in the batch is obtained by slicing along the first dimension of the underlying data.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| modality | str | The modality of the multi-modal item that uses this keyword argument. | required |
| slices | Union[Sequence[slice], Sequence[Sequence[slice]]] | For each multi-modal item, a slice (dim=0) or a tuple of slices (dim>0) that is used to extract the data corresponding to it. | required |
| dim | int | The dimension along which to extract data; defaults to 0. | 0 |
Example:
Given:
slices: [slice(0, 3), slice(3, 7), slice(7, 9)]
Input:
Data: [AAABBBBCC]
Output:
Element 1: [AAA]
Element 2: [BBBB]
Element 3: [CC]
Given:
slices: [
(slice(None), slice(0, 3)),
(slice(None), slice(3, 7)),
(slice(None), slice(7, 9))]
dim: 1
Input:
Data: [[A],[A],[A],[B],[B],[B],[B],[C],[C]]
Output:
Element 1: [[A],[A],[A]]
Element 2: [[B],[B],[B],[B]]
Element 3: [[C],[C]]
Source code in vllm/multimodal/inputs.py
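As a small sketch in plain PyTorch (not the vLLM internals), the first example above corresponds to slicing a flattened tensor along dim=0:

```python
import torch

data = torch.arange(9)                            # conceptually [AAABBBBCC]
slices = [slice(0, 3), slice(3, 7), slice(7, 9)]
elements = [data[s] for s in slices]              # per-item pieces of size 3, 4, and 2
```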
flat_from_sizes
staticmethod
¶
flat_from_sizes(
    modality: str, size_per_item: torch.Tensor, dim: int = 0
)
Defines a field where an element in the batch is obtained by slicing along the first dimension of the underlying data.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| modality | str | The modality of the multi-modal item that uses this keyword argument. | required |
| size_per_item | torch.Tensor | For each multi-modal item, the size of the slice that is used to extract the data corresponding to it. | required |
| dim | int | The dimension along which to slice; defaults to 0. | 0 |
Example:
Given:
size_per_item: [3, 4, 2]
Input:
Data: [AAABBBBCC]
Output:
Element 1: [AAA]
Element 2: [BBBB]
Element 3: [CC]
Given:
size_per_item: [3, 4, 2]
dim: 1
Input:
Data: [[A],[A],[A],[B],[B],[B],[B],[C],[C]]
Output:
Element 1: [[A],[A],[A]]
Element 2: [[B],[B],[B],[B]]
Element 3: [[C],[C]]
Source code in vllm/multimodal/inputs.py
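As a small sketch in plain PyTorch (not the vLLM internals), per-item sizes correspond to the slices used by flat via a cumulative sum:

```python
import torch

size_per_item = torch.tensor([3, 4, 2])
ends = torch.cumsum(size_per_item, dim=0)                       # tensor([3, 7, 9])
starts = ends - size_per_item                                   # tensor([0, 3, 7])
slices = [slice(int(s), int(e)) for s, e in zip(starts, ends)]
# [slice(0, 3), slice(3, 7), slice(7, 9)], matching the flat() example
```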
shared
staticmethod
¶
shared(modality: str, batch_size: int)
Defines a field where an element in the batch is obtained by taking the entirety of the underlying data.
This means that the data is the same for each element in the batch.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| modality | str | The modality of the multi-modal item that uses this keyword argument. | required |
| batch_size | int | The number of multi-modal items which share this data. | required |
Example:
Given:
batch_size: 4
Input:
Data: [XYZ]
Output:
Element 1: [XYZ]
Element 2: [XYZ]
Element 3: [XYZ]
Element 4: [XYZ]
Source code in vllm/multimodal/inputs.py
MultiModalFieldElem
dataclass
¶
Represents a keyword argument corresponding to a multi-modal item in MultiModalKwargs.
Source code in vllm/multimodal/inputs.py
data
instance-attribute
¶
data: NestedTensors
The tensor data of this field in MultiModalKwargs, i.e. the value of the keyword argument to be passed to the model.
field
instance-attribute
¶
field: BaseMultiModalField
Defines how to combine the tensor data of this field with others in order to batch multi-modal items together for model inference.
key
instance-attribute
¶
key: str
The key of this field in MultiModalKwargs, i.e. the name of the keyword argument to be passed to the model.
modality
instance-attribute
¶
modality: str
The modality of the corresponding multi-modal item. Each multi-modal item can consist of multiple keyword arguments.
__eq__
¶
Source code in vllm/multimodal/inputs.py
__init__
¶
__init__(
modality: str,
key: str,
data: NestedTensors,
field: BaseMultiModalField,
) -> None
MultiModalFlatField
dataclass
¶
Bases: BaseMultiModalField
Source code in vllm/multimodal/inputs.py
__init__
¶
_reduce_data
¶
_reduce_data(batch: list[NestedTensors]) -> NestedTensors
Source code in vllm/multimodal/inputs.py
build_elems
¶
build_elems(
modality: str, key: str, data: NestedTensors
) -> Sequence[MultiModalFieldElem]
Source code in vllm/multimodal/inputs.py
MultiModalInputs
¶
Bases: TypedDict
Represents the outputs of BaseMultiModalProcessor, ready to be passed to vLLM internals.
Source code in vllm/multimodal/inputs.py
cache_salt
instance-attribute
¶
cache_salt: NotRequired[str]
Optional cache salt to be used for prefix caching.
mm_hashes
instance-attribute
¶
mm_hashes: Optional[MultiModalHashDict]
The hashes of the multi-modal data.
mm_kwargs
instance-attribute
¶
mm_kwargs: MultiModalKwargs
Keyword arguments to be directly passed to the model after batching.
mm_placeholders
instance-attribute
¶
mm_placeholders: MultiModalPlaceholderDict
For each modality, information about the placeholder tokens in prompt_token_ids.
prompt_token_ids
instance-attribute
¶
The processed token IDs, which include placeholder tokens.
token_type_ids
instance-attribute
¶
token_type_ids: NotRequired[list[int]]
The token type IDs of the prompt.
MultiModalKwargs
¶
Bases: UserDict[str, NestedTensors]
A dictionary that represents the keyword arguments to torch.nn.Module.forward.
The items metadata enables us to obtain the keyword arguments corresponding to each data item in MultiModalDataItems, via get_item and get_items.
Source code in vllm/multimodal/inputs.py
__eq__
¶
Source code in vllm/multimodal/inputs.py
__init__
¶
__init__(
data: Mapping[str, NestedTensors],
*,
items: Optional[Sequence[MultiModalKwargsItem]] = None,
) -> None
Source code in vllm/multimodal/inputs.py
_try_stack
staticmethod
¶
_try_stack(
nested_tensors: NestedTensors, pin_memory: bool = False
) -> NestedTensors
Stack the inner dimensions that have the same shape in a nested list of tensors.
Thus, a dimension represented by a list means that the inner dimensions are different for each element along that dimension.
Source code in vllm/multimodal/inputs.py
_validate_modality
¶
Source code in vllm/multimodal/inputs.py
as_kwargs
staticmethod
¶
as_kwargs(
batched_inputs: BatchedTensorInputs, *, device: Device
) -> BatchedTensorInputs
Source code in vllm/multimodal/inputs.py
batch
staticmethod
¶
batch(
inputs_list: list[MultiModalKwargs],
pin_memory: bool = False,
) -> BatchedTensorInputs
Batch multiple inputs together into a dictionary.
The resulting dictionary has the same keys as the inputs. If the corresponding value from each input is a tensor and they all share the same shape, the output value is a single batched tensor; otherwise, the output value is a list containing the original value from each input.
Source code in vllm/multimodal/inputs.py
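A hedged usage sketch of this rule (the key name and tensor shapes are illustrative only):

```python
import torch

from vllm.multimodal.inputs import MultiModalKwargs

a = MultiModalKwargs({"pixel_values": torch.zeros(3, 224, 224)})
b = MultiModalKwargs({"pixel_values": torch.zeros(3, 224, 224)})
same = MultiModalKwargs.batch([a, b])
# Same shapes: "pixel_values" becomes a single stacked (2, 3, 224, 224) tensor.

c = MultiModalKwargs({"pixel_values": torch.zeros(3, 448, 448)})
mixed = MultiModalKwargs.batch([a, c])
# Mismatched shapes: "pixel_values" stays a list of the two original tensors.
```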
from_hf_inputs
staticmethod
¶
from_hf_inputs(
hf_inputs: BatchFeature,
config_by_key: Mapping[str, MultiModalFieldConfig],
)
Source code in vllm/multimodal/inputs.py
from_items
staticmethod
¶
from_items(items: Sequence[MultiModalKwargsItem])
Construct a new MultiModalKwargs from multiple items.
Source code in vllm/multimodal/inputs.py
get_item
¶
get_item(
modality: str, item_index: int
) -> MultiModalKwargsItem
Get the keyword arguments corresponding to an item identified by its modality and index.
Source code in vllm/multimodal/inputs.py
get_item_count
¶
Get the number of items belonging to a modality.
get_items
¶
get_items(modality: str) -> Sequence[MultiModalKwargsItem]
Get the keyword arguments corresponding to each item belonging to a modality.
Source code in vllm/multimodal/inputs.py
MultiModalKwargsItem
¶
Bases: UserDict[str, MultiModalFieldElem]
A collection of MultiModalFieldElem corresponding to a data item in MultiModalDataItems.
Source code in vllm/multimodal/inputs.py
MultiModalSharedField
dataclass
¶
Bases: BaseMultiModalField
Source code in vllm/multimodal/inputs.py
_reduce_data
¶
_reduce_data(batch: list[NestedTensors]) -> NestedTensors
build_elems
¶
build_elems(
modality: str, key: str, data: NestedTensors
) -> Sequence[MultiModalFieldElem]
PlaceholderRange
dataclass
¶
Placeholder location information for multi-modal data.
Example:
Prompt: AAAA BBBB What is in these images?
Images A and B will have:
A: PlaceholderRange(offset=0, length=4)
B: PlaceholderRange(offset=5, length=4)
Source code in vllm/multimodal/inputs.py
is_embed
class-attribute
instance-attribute
¶
A boolean mask of shape (length,) indicating which positions between offset and offset + length to assign embeddings to.
__eq__
¶
Source code in vllm/multimodal/inputs.py
nested_tensors_equal
¶
nested_tensors_equal(
a: NestedTensors, b: NestedTensors
) -> bool
Equality check between NestedTensors objects.
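A brief usage sketch:

```python
import torch

from vllm.multimodal.inputs import nested_tensors_equal

assert nested_tensors_equal(torch.ones(2), torch.ones(2))
assert not nested_tensors_equal([torch.ones(2)], [torch.zeros(2)])
```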