BerriAI / litellm

Python SDK and Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing, and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, VLLM, NVIDIA NIM]


docs: link dynamic TPM/RPM limiting to request prioritization doc (#22988)

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Krish Dholakia committed
52ae17746bf8107a4d9dff2e7c73d1eebcc4360b
Parent: cf439c2
Committed by GitHub <noreply@github.com> on 3/8/2026, 3:27:41 AM