vllm.entrypoints.openai.api_server
INVOCATION_TYPES
module-attribute
¶
INVOCATION_TYPES: list[
    tuple[RequestType, tuple[GetHandlerFn, EndpointFn]]
] = [
    (ChatCompletionRequest, (chat, create_chat_completion)),
    (CompletionRequest, (completion, create_completion)),
    (EmbeddingRequest, (embedding, create_embedding)),
    (ClassificationRequest, (classify, create_classify)),
    (ScoreRequest, (score, create_score)),
    (RerankRequest, (rerank, do_rerank)),
    (PoolingRequest, (pooling, create_pooling)),
]
INVOCATION_VALIDATORS
module-attribute
¶
INVOCATION_VALIDATORS = [
    (TypeAdapter(request_type), (get_handler, endpoint))
    for (
        request_type,
        (get_handler, endpoint),
    ) in INVOCATION_TYPES
]
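A single route can use these validators to infer the request type from a raw JSON payload: each TypeAdapter is tried in order, and the first schema that validates wins (this is how the SageMaker invocations handler dispatches; see below). A minimal sketch of that pattern, assuming pydantic v2's TypeAdapter.validate_python; the dispatch helper name is illustrative:

from pydantic import ValidationError

def dispatch(payload: dict, validators):
    # Try each (TypeAdapter, (get_handler, endpoint)) pair in order;
    # the first schema the payload validates against wins.
    for adapter, (get_handler, endpoint) in validators:
        try:
            request = adapter.validate_python(payload)
        except ValidationError:
            continue  # payload does not match this request type
        return request, get_handler, endpoint
    raise ValueError("payload matches no supported request type")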
parser
module-attribute
¶
parser = FlexibleArgumentParser(
    description="vLLM OpenAI-Compatible RESTful API server."
)
AuthenticationMiddleware
¶
Pure ASGI middleware that authenticates each request by checking if the Authorization header exists and equals "Bearer {api_key}".
Notes¶
There are two cases in which authentication is skipped:

1. The HTTP method is OPTIONS.
2. The request path doesn't start with /v1 (e.g. /health).
Source code in vllm/entrypoints/openai/api_server.py
__call__
¶
__call__(
    scope: Scope, receive: Receive, send: Send
) -> Awaitable[None]
Source code in vllm/entrypoints/openai/api_server.py
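A client-side usage sketch (assuming the server was started with --api-key my-secret-key; key and URL are placeholders):

import httpx

# /v1 routes require the exact "Bearer {api_key}" value.
resp = httpx.get(
    "http://localhost:8000/v1/models",
    headers={"Authorization": "Bearer my-secret-key"},  # placeholder key
)
resp.raise_for_status()

# Paths outside /v1 (e.g. /health) are served without credentials.
assert httpx.get("http://localhost:8000/health").status_code == 200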
PrometheusResponse
¶
SSEDecoder
¶
Robust Server-Sent Events decoder for streaming responses.
Source code in vllm/entrypoints/openai/api_server.py
__init__
¶
decode_chunk
¶
Decode a chunk of SSE data and return parsed events.
Source code in vllm/entrypoints/openai/api_server.py
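The decoder's exact interface is internal, but the SSE framing it handles can be shown with a self-contained sketch (not vLLM's implementation): events are separated by blank lines, data: payloads carry JSON, and [DONE] terminates an OpenAI-style stream.

import json

def decode_sse(buffer: str, chunk: str):
    # Accumulate raw text, then split off complete events on blank lines.
    buffer += chunk
    events, done = [], False
    while "\n\n" in buffer:
        raw_event, buffer = buffer.split("\n\n", 1)
        for line in raw_event.splitlines():
            if not line.startswith("data:"):
                continue
            data = line[len("data:"):].strip()
            if data == "[DONE]":  # OpenAI-style stream terminator
                done = True
            else:
                events.append(json.loads(data))
    return events, buffer, done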
extract_content
¶
XRequestIdMiddleware
¶
Middleware that sets the X-Request-Id header of each response to a random uuid4 (hex) value if the header isn't already present in the request; otherwise, it reuses the provided request id.
Source code in vllm/entrypoints/openai/api_server.py
__call__
¶
__call__(
    scope: Scope, receive: Receive, send: Send
) -> Awaitable[None]
Source code in vllm/entrypoints/openai/api_server.py
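A usage sketch (URL and model name are placeholders): a caller-supplied X-Request-Id is echoed back, while omitting it yields a server-generated uuid4 hex value.

import uuid

import httpx

request_id = uuid.uuid4().hex
resp = httpx.post(
    "http://localhost:8000/v1/completions",
    json={"model": "my-model", "prompt": "Hello"},  # placeholder model
    headers={"X-Request-Id": request_id},
)
# The middleware echoes back the id we provided.
assert resp.headers.get("X-Request-Id") == request_id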
_extract_content_from_chunk
¶
Extract content from a streaming response chunk.
Source code in vllm/entrypoints/openai/api_server.py
_log_non_streaming_response
¶
_log_non_streaming_response(response_body: list) -> None
Log non-streaming response.
Source code in vllm/entrypoints/openai/api_server.py
_log_streaming_response
¶
_log_streaming_response(
    response, response_body: list
) -> None
Log streaming response with robust SSE parsing.
Source code in vllm/entrypoints/openai/api_server.py
base
¶
base(request: Request) -> OpenAIServing
build_app
¶
build_app(args: Namespace) -> FastAPI
Source code in vllm/entrypoints/openai/api_server.py
build_async_engine_client
async
¶
build_async_engine_client(
    args: Namespace,
    client_config: Optional[dict[str, Any]] = None,
) -> AsyncIterator[EngineClient]
Source code in vllm/entrypoints/openai/api_server.py
build_async_engine_client_from_engine_args
async
¶
build_async_engine_client_from_engine_args(
    engine_args: AsyncEngineArgs,
    disable_frontend_multiprocessing: bool = False,
    client_config: Optional[dict[str, Any]] = None,
) -> AsyncIterator[EngineClient]
Create EngineClient, either:

- in-process using the AsyncLLMEngine directly
- multiprocess using AsyncLLMEngine RPC

Returns the Client or None if the creation failed.
Source code in vllm/entrypoints/openai/api_server.py
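Both builders are async context managers yielding an EngineClient; a usage sketch (the model name is a placeholder, and check_health is assumed from the EngineClient interface):

import asyncio

from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.entrypoints.openai.api_server import (
    build_async_engine_client_from_engine_args,
)

async def main() -> None:
    engine_args = AsyncEngineArgs(model="facebook/opt-125m")  # placeholder
    async with build_async_engine_client_from_engine_args(engine_args) as client:
        await client.check_health()  # raises if the engine is unhealthy

asyncio.run(main())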
cancel_responses
async
¶
cancel_responses(response_id: str, raw_request: Request)
Source code in vllm/entrypoints/openai/api_server.py
chat
¶
chat(request: Request) -> Optional[OpenAIServingChat]
classify
¶
classify(
    request: Request,
) -> Optional[ServingClassification]
completion
¶
completion(
    request: Request,
) -> Optional[OpenAIServingCompletion]
create_chat_completion
async
¶
create_chat_completion(
    request: ChatCompletionRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
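A client-side sketch using the official OpenAI SDK against this route (base URL, API key, and model name are placeholders):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="my-model",  # placeholder
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)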
create_classify
async
¶
create_classify(
    request: ClassificationRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
create_completion
async
¶
create_completion(
    request: CompletionRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
create_embedding
async
¶
create_embedding(
    request: EmbeddingRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
create_pooling
async
¶
create_pooling(
    request: PoolingRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
create_responses
async
¶
create_responses(
    request: ResponsesRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
create_score
async
¶
create_score(request: ScoreRequest, raw_request: Request)
Source code in vllm/entrypoints/openai/api_server.py
create_score_v1
async
¶
create_score_v1(
    request: ScoreRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
create_server_socket
¶
Source code in vllm/entrypoints/openai/api_server.py
create_transcriptions
async
¶
create_transcriptions(
    raw_request: Request,
    request: Annotated[TranscriptionRequest, Form()],
)
Source code in vllm/entrypoints/openai/api_server.py
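Because the request is declared with Form(), this endpoint expects multipart form data rather than JSON. A request sketch (URL, file, and model name are placeholders):

import httpx

with open("audio.wav", "rb") as f:  # placeholder audio file
    resp = httpx.post(
        "http://localhost:8000/v1/audio/transcriptions",
        files={"file": f},
        data={"model": "my-whisper-model"},  # placeholder
    )
print(resp.json())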
create_translations
async
¶
create_translations(
    request: Annotated[TranslationRequest, Form()],
    raw_request: Request,
)
Source code in vllm/entrypoints/openai/api_server.py
detokenize
async
¶
detokenize(
    request: DetokenizeRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
do_rerank
async
¶
do_rerank(request: RerankRequest, raw_request: Request)
Source code in vllm/entrypoints/openai/api_server.py
do_rerank_v1
async
¶
do_rerank_v1(request: RerankRequest, raw_request: Request)
Source code in vllm/entrypoints/openai/api_server.py
do_rerank_v2
async
¶
do_rerank_v2(request: RerankRequest, raw_request: Request)
Source code in vllm/entrypoints/openai/api_server.py
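do_rerank, do_rerank_v1, and do_rerank_v2 delegate to the same scoring handler under different route prefixes. A request sketch (URL and model name are placeholders; fields follow RerankRequest):

import httpx

resp = httpx.post(
    "http://localhost:8000/v2/rerank",
    json={
        "model": "my-reranker",  # placeholder
        "query": "What is vLLM?",
        "documents": [
            "vLLM is a high-throughput LLM serving engine.",
            "Bananas are yellow.",
        ],
    },
)
print(resp.json())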
embedding
¶
embedding(
    request: Request,
) -> Optional[OpenAIServingEmbedding]
engine_client
¶
engine_client(request: Request) -> EngineClient
get_server_load_metrics
async
¶
Source code in vllm/entrypoints/openai/api_server.py
health
async
¶
init_app_state
async
¶
init_app_state(
    engine_client: EngineClient,
    vllm_config: VllmConfig,
    state: State,
    args: Namespace,
) -> None
Source code in vllm/entrypoints/openai/api_server.py
invocations
async
¶
For SageMaker, routes requests to the matching handler based on the request type.
Source code in vllm/entrypoints/openai/api_server.py
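Dispatch relies on the INVOCATION_VALIDATORS schema matching shown above, so a chat-shaped payload posted to this route reaches the chat handler. A sketch (URL and model name are placeholders):

import httpx

resp = httpx.post(
    "http://localhost:8000/invocations",
    json={
        "model": "my-model",  # placeholder
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)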
is_sleeping
async
¶
Source code in vllm/entrypoints/openai/api_server.py
lifespan
async
¶
Source code in vllm/entrypoints/openai/api_server.py
load_log_config
¶
Source code in vllm/entrypoints/openai/api_server.py
load_lora_adapter
async
¶
load_lora_adapter(
    request: LoadLoRAAdapterRequest, raw_request: Request
)
Source code in vllm/entrypoints/openai/api_server.py
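A request sketch for dynamic adapter loading (assumes runtime LoRA updating is enabled, e.g. VLLM_ALLOW_RUNTIME_LORA_UPDATING=1; adapter name and path are placeholders):

import httpx

resp = httpx.post(
    "http://localhost:8000/v1/load_lora_adapter",
    json={"lora_name": "my_adapter", "lora_path": "/path/to/adapter"},
)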
maybe_register_tokenizer_info_endpoint
¶
Conditionally register the tokenizer info endpoint if enabled.
Source code in vllm/entrypoints/openai/api_server.py
models
¶
models(request: Request) -> OpenAIServingModels
mount_metrics
¶
Mount prometheus metrics to a FastAPI app.
Source code in vllm/entrypoints/openai/api_server.py
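The general pattern is to expose the Prometheus registry as a mounted ASGI sub-application; a minimal sketch of that pattern (not necessarily vLLM's exact implementation):

from fastapi import FastAPI
from prometheus_client import make_asgi_app

app = FastAPI()
# Serve the default Prometheus registry at /metrics.
app.mount("/metrics", make_asgi_app())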
ping
async
¶
Ping check. Endpoint required for SageMaker.
pooling
¶
pooling(request: Request) -> Optional[OpenAIServingPooling]
rerank
¶
rerank(request: Request) -> Optional[ServingScores]
reset_prefix_cache
async
¶
Reset the prefix cache. Note that we currently do not check if the prefix cache is successfully reset in the API server.
Source code in vllm/entrypoints/openai/api_server.py
responses
¶
responses(
    request: Request,
) -> Optional[OpenAIServingResponses]
retrieve_responses
async
¶
retrieve_responses(response_id: str, raw_request: Request)
Source code in vllm/entrypoints/openai/api_server.py
run_server
async
¶
Run a single-worker API server.
run_server_worker
async
¶
Run a single API server worker.
Source code in vllm/entrypoints/openai/api_server.py
score
¶
score(request: Request) -> Optional[ServingScores]
setup_server
¶
Validate API server args, set up the signal handler, and create the socket ready to serve.
Source code in vllm/entrypoints/openai/api_server.py
show_available_models
async
¶
show_server_info
async
¶
show_version
async
¶
sleep
async
¶
Source code in vllm/entrypoints/openai/api_server.py
start_profile
async
¶
Source code in vllm/entrypoints/openai/api_server.py
stop_profile
async
¶
Source code in vllm/entrypoints/openai/api_server.py
tokenization
¶
tokenization(request: Request) -> OpenAIServingTokenization
tokenize
async
¶
tokenize(request: TokenizeRequest, raw_request: Request)
Source code in vllm/entrypoints/openai/api_server.py
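A round-trip sketch through the tokenize/detokenize endpoints (URL and model name are placeholders; the tokens response field is assumed from TokenizeResponse):

import httpx

tok = httpx.post(
    "http://localhost:8000/tokenize",
    json={"model": "my-model", "prompt": "Hello world"},  # placeholders
).json()

detok = httpx.post(
    "http://localhost:8000/detokenize",
    json={"model": "my-model", "tokens": tok["tokens"]},
).json()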
transcription
¶
transcription(
    request: Request,
) -> OpenAIServingTranscription
translation
¶
translation(request: Request) -> OpenAIServingTranslation
unload_lora_adapter
async
¶
unload_lora_adapter(
    request: UnloadLoRAAdapterRequest, raw_request: Request
)