vllm.model_executor.guided_decoding.outlines_logits_processors
BaseLogitsProcessor
__call__
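The body of __call__ is not reproduced here, but the technique it implements is standard guided-decoding masking: the guide reports which token ids are legal in the current state, and every other logit is pushed to -inf before sampling. A minimal sketch of that masking step only; the helper name and the (scores, allowed_token_ids) interface are illustrative, not vLLM's actual signature.

```python
# Sketch of the masking idea; the real __call__ signature and guide API
# are not shown above, so this helper's interface is illustrative.
import torch

def mask_to_allowed(scores: torch.Tensor,
                    allowed_token_ids: list[int]) -> torch.Tensor:
    # Tokens the guide does not allow in the current state get -inf,
    # so sampling can only choose grammar-legal continuations.
    mask = torch.full_like(scores, float("-inf"))
    mask[allowed_token_ids] = 0.0
    return scores + mask
```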
__init__
__init__(
    guide: Guide,
    eos_token_id: int,
    reasoner: Optional[ReasoningParser],
) -> None
clone
clone() -> BaseLogitsProcessor
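The guide's state advances token by token for each sequence, so a processor is cloned per sequence rather than shared. A hypothetical helper showing that usage pattern:

```python
# Hypothetical helper: each sequence tracks its own guide state, so the
# template processor is cloned per sequence rather than shared.
from vllm.model_executor.guided_decoding.outlines_logits_processors import (
    BaseLogitsProcessor)

def processors_for_batch(template: BaseLogitsProcessor,
                         num_seqs: int) -> list[BaseLogitsProcessor]:
    return [template.clone() for _ in range(num_seqs)]
```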
JSONLogitsProcessor
Bases: RegexLogitsProcessor
__init__
__init__(
    schema: Union[str, dict, BaseModel],
    tokenizer: PreTrainedTokenizerBase,
    whitespace_pattern: Union[str, None],
    reasoner: Optional[ReasoningParser],
) -> None
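A construction sketch following the signature above. The schema dict, tokenizer choice, and argument values are illustrative only, and assume any Hugging Face tokenizer is available locally.

```python
# Construction sketch: the schema, model name, and argument values are
# illustrative; any Hugging Face tokenizer works the same way.
from transformers import AutoTokenizer

from vllm.model_executor.guided_decoding.outlines_logits_processors import (
    JSONLogitsProcessor)

schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["city", "population"],
}

tokenizer = AutoTokenizer.from_pretrained("gpt2")
processor = JSONLogitsProcessor(
    schema=schema,
    tokenizer=tokenizer,
    whitespace_pattern=None,  # keep the default whitespace handling
    reasoner=None,            # no ReasoningParser supplied
)
```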
OutlinesVocabulary
Wrapper class for outlines_core.Vocabulary, which allows us to store a hash with the vocabulary.
__init__
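A minimal sketch of the wrapper idea; the attribute names are illustrative and are not vLLM's. The point is that the hash is computed once and reused wherever the vocabulary participates in a cache key.

```python
# Illustrative stand-in for the wrapper idea, not vLLM's class.
class HashedVocabulary:
    def __init__(self, inner_vocabulary, vocabulary_hash: int) -> None:
        self.inner = inner_vocabulary  # the wrapped outlines_core Vocabulary
        self._hash = vocabulary_hash   # precomputed once, stable across calls

    def __hash__(self) -> int:
        return self._hash
```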
RegexLogitsProcessor
Bases: BaseLogitsProcessor
__init__
__init__(
    regex_string: str,
    tokenizer: PreTrainedTokenizerBase,
    reasoner: Optional[ReasoningParser],
) -> None
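A construction sketch following the signature above, constraining generation to one of three literal answers. The regex, tokenizer choice, and argument values are illustrative.

```python
# Construction sketch; tokenizer choice and regex are illustrative.
from transformers import AutoTokenizer

from vllm.model_executor.guided_decoding.outlines_logits_processors import (
    RegexLogitsProcessor)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
processor = RegexLogitsProcessor(
    regex_string=r"(yes|no|maybe)",
    tokenizer=tokenizer,
    reasoner=None,  # no ReasoningParser supplied
)
```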
_get_guide
classmethod
_get_guide(
    regex_string: str, tokenizer: PreTrainedTokenizerBase
) -> Guide
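Compiling a regex into a token-level Guide is expensive, so a factory like this is typically memoized on the regex string plus something identifying the tokenizer's vocabulary. A generic sketch of that pattern; the cache key and builder callback are assumptions, not vLLM's internals.

```python
# Generic memoization sketch; key construction and builder callback are
# assumptions, not vLLM's internals.
from typing import Callable

_guide_cache: dict[tuple[str, int], object] = {}

def cached_guide(regex_string: str, vocab_hash: int,
                 build: Callable[[str], object]) -> object:
    # Building a token-level FSM from a regex is expensive; reuse the
    # result for identical (regex, vocabulary) pairs.
    key = (regex_string, vocab_hash)
    if key not in _guide_cache:
        _guide_cache[key] = build(regex_string)
    return _guide_cache[key]
```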
_reduced_vocabulary
Create a map from vocabulary tokens to lists of equivalent token ids.
Returns:

| Type | Description |
| --- | --- |
| dict[bytes, list[int]] | A dict of token string -> equivalent token ids |
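A simplified sketch of the reduction; real tokenizers also need special-token and byte-level BPE handling, which is omitted here.

```python
# Simplified sketch of grouping token ids by their decoded bytes.
def reduce_vocabulary(id_to_token: dict[int, str]) -> dict[bytes, list[int]]:
    reduced: dict[bytes, list[int]] = {}
    for token_id, token in id_to_token.items():
        # Several ids can decode to the same byte string; group them.
        reduced.setdefault(token.encode("utf-8"), []).append(token_id)
    return reduced
```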
get_cache
Get the Cache instance to be used for index caching
get_cache_path
get_cache_path() -> str
Get the path under which previously-computed return values are cached
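A hypothetical sketch of how such a path could be resolved; the environment variable and default directory below are invented for illustration and are not vLLM's actual configuration.

```python
# Hypothetical resolution only: the variable name and default directory
# are made up for this sketch, not vLLM's actual settings.
import os

def resolve_cache_path() -> str:
    default = os.path.join(os.path.expanduser("~"), ".cache",
                           "vllm", "outlines")
    return os.environ.get("EXAMPLE_GUIDED_DECODING_CACHE_DIR", default)
```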
get_vocabulary
get_vocabulary(tokenizer: AnyTokenizer) -> Vocabulary
Get the Vocabulary object for a given tokenizer.