vllm

Community

A high-throughput and memory-efficient inference and serving engine for LLMs

Installation

To install this package, run:

Conda
$ conda install services::vllm

Usage Tracking

0.8.1
Downloads (Last 6 months): 0

About

Summary

A high-throughput and memory-efficient inference and serving engine for LLMs

Last Updated

Jul 30, 2025 at 16:25

License

Apache-2.0

Total Downloads

86

Supported Platforms

linux-64