BerriAI / litellm

Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, VLLM, NVIDIA NIM]


fix(auth): make post-custom-auth checks opt-in via litellm_settings

Make `_run_post_custom_auth_checks()` opt-in behind
`enable_post_custom_auth_checks` in litellm_settings, rather than
running unconditionally on every custom auth return path.

Users who need post-custom-auth DB lookups (end_user budgets, token
expiry, team/org checks) can enable it via:

  litellm_settings:
    enable_post_custom_auth_checks: true

This resolves a ~44% throughput regression for deployments using custom
auth where these checks are unnecessary.
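The opt-in gate described above can be sketched as follows. This is a minimal illustration, not the actual litellm source: the settings dict, `handle_custom_auth_result`, and the stub check function are assumptions modeled on the names in this commit message.

```python
# Hypothetical sketch of the opt-in gate; names are assumed from the
# commit message, not taken from the actual litellm codebase.

litellm_settings = {"enable_post_custom_auth_checks": True}


def _run_post_custom_auth_checks(auth_result: dict) -> dict:
    # Placeholder for the DB-backed checks the message mentions:
    # end_user budgets, token expiry, team/org checks.
    return auth_result


def handle_custom_auth_result(auth_result: dict) -> dict:
    # Only run the expensive post-custom-auth DB lookups when the
    # deployment has explicitly enabled them in litellm_settings.
    if litellm_settings.get("enable_post_custom_auth_checks", False):
        return _run_post_custom_auth_checks(auth_result)
    # Default path: skip the checks, avoiding the throughput cost.
    return auth_result
```

With the flag unset (the new default), custom auth results pass straight through; enabling the flag restores the previous behavior.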

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Krrish Dholakia committed
a9e6ae33c1fecd97e034d3ec24cf61164e565fec
Parent: 90b850e