llmlingua

To speed up LLM inference and enhance LLMs' perception of key information, LLMLingua compresses the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
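As an illustration of how such prompt compression is typically invoked in Python, here is a minimal sketch; the PromptCompressor class, the compress_prompt method, and its parameters are assumptions based on common usage and are not confirmed by this page.

    from llmlingua import PromptCompressor

    # Hypothetical usage sketch: the class and method names below are
    # assumptions, not taken from this page.
    compressor = PromptCompressor()

    long_prompt = "... a long context of several thousand tokens ..."
    result = compressor.compress_prompt(
        long_prompt,
        instruction="Answer the question using the context.",
        question="What are the key findings?",
        target_token=200,  # rough token budget for the compressed prompt
    )

    print(result["compressed_prompt"])  # shortened prompt to send to the LLM
    print(result["ratio"])              # achieved compression ratio

At a 20x ratio, a prompt of roughly 4,000 tokens would be reduced to about 200 tokens before being passed to the model.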

Installation

Installation commands are not available for this package.
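
If the package is distributed on PyPI under the same name (an assumption this page does not confirm), the usual command would be pip install llmlingua.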

About

Summary

To speed up LLM inference and enhance LLMs' perception of key information, LLMLingua compresses the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.

Last Updated

Jan 5, 2025 at 14:00

License

MIT

Total Downloads

0