llmlingua

To speed up LLM inference and improve a model's ability to perceive key information, LLMLingua compresses the prompt and the KV-cache, achieving up to 20x compression with minimal performance loss.
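The idea can be sketched as follows: score each token in the prompt by how much information it carries, then keep only the highest-scoring tokens within a budget. This toy uses a simple frequency/stopword heuristic rather than LLMLingua's actual method (which ranks tokens with a small language model's perplexity scores); the function name, stopword list, and scoring rule here are illustrative assumptions, not the library's API.

```python
# Toy sketch of prompt compression: keep only the most
# "informative" tokens within a budget, preserving order.
# NOT LLMLingua's algorithm -- a stand-in heuristic for illustration.
import re
from collections import Counter

# Hypothetical stand-in for "low-information" tokens.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "are",
             "in", "that", "this", "with"}

def compress_prompt(prompt: str, rate: float = 0.5) -> str:
    """Keep roughly `rate` of the tokens, dropping low-information ones first."""
    tokens = re.findall(r"\S+", prompt)
    budget = max(1, int(len(tokens) * rate))
    freq = Counter(t.lower().strip(".,;:") for t in tokens)

    def score(tok: str) -> float:
        w = tok.lower().strip(".,;:")
        if w in STOPWORDS:
            return 0.0          # stopwords are dropped first
        return 1.0 / freq[w]    # rarer tokens count as more informative

    # Keep the highest-scoring token positions, then restore original order.
    ranked = sorted(enumerate(tokens), key=lambda p: score(p[1]), reverse=True)
    keep = {i for i, _ in ranked[:budget]}
    return " ".join(t for i, t in enumerate(tokens) if i in keep)

prompt = "The quick brown fox jumps over the lazy dog in the field of a farm."
short = compress_prompt(prompt, rate=0.4)
print(short)  # a much shorter prompt that retains the content-bearing words
```

In the real library the scoring step is where the work happens: a small LM estimates which tokens the target LLM can recover from context, so far more aggressive compression is possible than this frequency heuristic allows.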
