BerriAI / litellm

Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, VLLM, NVIDIA NIM]


fix(test): make test_google_endpoint_routing resilient to module reloads (#24499)

Same root cause as test_end_user_jwt_auth: test_cors_config reloads the
proxy_server module, so initialize() sets llm_router on the old module
object while the test's patch target resolves to the new one, where it is None.

Fix by syncing llm_router from the original module to the live
sys.modules entry after initialize().
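The desync and the fix can be sketched as follows. This is a minimal, self-contained reproduction, not litellm's actual code: the module name `demo_proxy_server` and the string router value are illustrative stand-ins for the real proxy_server module and its Router instance.

```python
import sys
import types

# Create a throwaway module with an llm_router attribute and register it,
# as `import proxy_server` would.
old_module = types.ModuleType("demo_proxy_server")
old_module.llm_router = None
sys.modules["demo_proxy_server"] = old_module

# Another test reloads the module: sys.modules now points at a *new*
# module object, while our test file still holds the old reference.
new_module = types.ModuleType("demo_proxy_server")
new_module.llm_router = None
sys.modules["demo_proxy_server"] = new_module

# initialize() runs against the old reference and sets the router there.
old_module.llm_router = "router-instance"

# A patch target like "demo_proxy_server.llm_router" resolves through
# sys.modules, so it still sees None on the new module object.
assert sys.modules["demo_proxy_server"].llm_router is None

# The fix: sync llm_router onto the live sys.modules entry.
sys.modules["demo_proxy_server"].llm_router = old_module.llm_router
assert sys.modules["demo_proxy_server"].llm_router == "router-instance"
```

Syncing the attribute, rather than re-importing, keeps whatever module object other tests currently reference intact while making the patch target resolve correctly.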

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Aarish Alam committed
92a6507d655c6d7ecc5f2993c8a71cf07899e3b7
Parent: de7763f
Committed by GitHub <noreply@github.com> on 3/24/2026, 8:13:56 AM