Library to easily interface with LLM API providers
LiteLLM manages:

- Translating inputs to the provider's `completion`, `embedding`, and `image_generation` endpoints
- Consistent output: text responses are always available at `['choices'][0]['message']['content']` (see the first sketch below)
- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) via the Router (see the Router sketch below)
- Tracking spend and setting budgets per project via the LiteLLM Proxy Server (see the proxy sketch below)
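
As a minimal sketch of the unified interface, the same `completion` call works across providers, and the text response sits at the same path in the OpenAI-style response object. The model names and key values below are illustrative placeholders:

```python
import os

from litellm import completion

# Provider API keys are read from the environment (placeholder values shown).
os.environ["OPENAI_API_KEY"] = "sk-..."         # your OpenAI key
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # your Anthropic key

messages = [{"role": "user", "content": "Hello, how are you?"}]

# The same call signature works for different providers; LiteLLM translates
# the request to each provider's native completion endpoint.
openai_response = completion(model="gpt-3.5-turbo", messages=messages)
anthropic_response = completion(model="claude-3-haiku-20240307", messages=messages)

# Consistent output: the text is always at the same path.
print(openai_response["choices"][0]["message"]["content"])
print(anthropic_response["choices"][0]["message"]["content"])
```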
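
For retry/fallback across deployments, here is a sketch of the Router with two deployments (an Azure deployment and an OpenAI fallback) registered under one alias. The deployment name, environment variables, and `api_version` are assumptions for illustration:

```python
import os

from litellm import Router

model_list = [
    {
        "model_name": "gpt-3.5-turbo",  # alias callers use
        "litellm_params": {
            "model": "azure/my-azure-deployment",  # hypothetical Azure deployment
            "api_key": os.environ["AZURE_API_KEY"],
            "api_base": os.environ["AZURE_API_BASE"],
            "api_version": "2023-07-01-preview",
        },
    },
    {
        "model_name": "gpt-3.5-turbo",  # same alias: OpenAI fallback deployment
        "litellm_params": {
            "model": "gpt-3.5-turbo",
            "api_key": os.environ["OPENAI_API_KEY"],
        },
    },
]

# The Router load-balances across healthy deployments and retries on failure.
router = Router(model_list=model_list, num_retries=2)

response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
)
print(response["choices"][0]["message"]["content"])
```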
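
Spend tracking and per-project budgets live in the LiteLLM Proxy Server, which exposes an OpenAI-compatible endpoint (keys with budgets are managed on the proxy itself, e.g. via its `/key/generate` endpoint). As a sketch, assuming a proxy is already running locally on the default port, any OpenAI SDK client can point at it:

```python
import openai

# Assumes a LiteLLM Proxy was started separately, e.g.:
#   litellm --model gpt-3.5-turbo
# Recent versions listen on http://0.0.0.0:4000 by default (older ones used 8000).
client = openai.OpenAI(
    api_key="anything",  # or a proxy-issued key with a budget attached
    base_url="http://0.0.0.0:4000",
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "this request is routed through the proxy"}],
)
print(response.choices[0].message.content)
```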