vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
To install this package, run one of the following:
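For example (the exact commands are not shown on this page; the pip package name is the project's standard one, and the conda-forge channel is an assumption):

    pip install vllm
    conda install -c conda-forge vllm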
Easy, fast, and cheap LLM serving for everyone
Summary
A high-throughput and memory-efficient inference and serving engine for LLMs
Last Updated
Jan 11, 2026 at 10:24
License
Apache-2.0 AND BSD-3-Clause
Total Downloads
17.0K
Supported Platforms
Documentation
https://vllm.readthedocs.io/en/latest/
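As a quick illustration of what the engine provides, here is a minimal offline-inference sketch using vLLM's documented Python API (LLM and SamplingParams); the prompts and model name are illustrative assumptions, not taken from this page.

    from vllm import LLM, SamplingParams

    # Illustrative prompts and model; substitute your own.
    prompts = ["Hello, my name is", "The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    # Load the model and run batched generation with vLLM's offline engine.
    llm = LLM(model="facebook/opt-125m")
    outputs = llm.generate(prompts, sampling_params)

    for output in outputs:
        print(output.prompt, "->", output.outputs[0].text)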