A high-throughput and memory-efficient inference and serving engine for LLMs
![Version](https://anaconda.org/services/vllm/badges/version.svg)
![Latest release date](https://anaconda.org/services/vllm/badges/latest_release_date.svg)
![Latest release relative date](https://anaconda.org/services/vllm/badges/latest_release_relative_date.svg)
![Platforms](https://anaconda.org/services/vllm/badges/platforms.svg)
![License](https://anaconda.org/services/vllm/badges/license.svg)
![Downloads](https://anaconda.org/services/vllm/badges/downloads.svg)
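
As a quick orientation, the sketch below shows a minimal offline-generation call with vLLM's Python API (`LLM` and `SamplingParams`). The model name and sampling settings are illustrative placeholders, not something specified by this package listing.

```python
from vllm import LLM, SamplingParams

# Illustrative model choice; substitute any model supported by vLLM.
llm = LLM(model="facebook/opt-125m")

# Example sampling configuration (values chosen for demonstration only).
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Batched generation: each prompt yields one RequestOutput.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```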