# Python SDK and Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI
# (or native) format, with cost tracking, guardrails, load balancing, and
# logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker,
# HuggingFace, VLLM, NVIDIA NIM]
[tool.poetry]
name = "litellm"
version = "1.69.3"
description = "Library to easily interface with LLM API providers"
authors = ["BerriAI"]
license = "MIT"
readme = "README.md"
packages = [
    { include = "litellm" },
    { include = "litellm/py.typed" },
]
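# Shipping litellm/py.typed installs the PEP 561 marker file, which tells
# type checkers (mypy, pyright) to use the package's inline annotations.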
[tool.poetry.urls]
Homepage = "https://litellm.ai"
Repository = "https://github.com/BerriAI/litellm"
Documentation = "https://docs.litellm.ai"
[tool.poetry.dependencies]
python = ">=3.8.1,<4.0, !=3.9.7"
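# NOTE: 3.9.7 is excluded deliberately; that patch release presumably carries
# an upstream CPython regression that breaks a dependency of this project.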
httpx = ">=0.23.0"
openai = ">=1.68.2, <1.76.0"
python-dotenv = ">=0.2.0"
tiktoken = ">=0.7.0"
importlib-metadata = ">=6.8.0"
tokenizers = "*"
click = "*"
jinja2 = "^3.1.2"
aiohttp = "*"
pydantic = "^2.0.0"
jsonschema = "^4.22.0"
uvicorn = {version = "^0.29.0", optional = true}
uvloop = {version = "^0.21.0", optional = true, markers="sys_platform != 'win32'"}
gunicorn = {version = "^23.0.0", optional = true}
fastapi = {version = "^0.115.5", optional = true}
backoff = {version = "*", optional = true}
pyyaml = {version = "^6.0.1", optional = true}
rq = {version = "*", optional = true}
orjson = {version = "^3.9.7", optional = true}
apscheduler = {version = "^3.10.4", optional = true}
fastapi-sso = { version = "^0.16.0", optional = true }
PyJWT = { version = "^2.8.0", optional = true }
python-multipart = { version = "^0.0.18", optional = true}
cryptography = {version = "^43.0.1", optional = true}
prisma = {version = "0.11.0", optional = true}
azure-identity = {version = "^1.15.0", optional = true}
azure-keyvault-secrets = {version = "^4.8.0", optional = true}
google-cloud-kms = {version = "^2.21.3", optional = true}
resend = {version = "^0.8.0", optional = true}
pynacl = {version = "^1.5.0", optional = true}
websockets = {version = "^13.1.0", optional = true}
boto3 = {version = "1.34.34", optional = true}
redisvl = {version = "^0.4.1", optional = true, markers = "python_version >= '3.9' and python_version < '3.14'"}
mcp = {version = "1.5.0", optional = true, python = ">=3.10"}
litellm-proxy-extras = {version = "0.1.21", optional = true}
rich = {version = "13.7.1", optional = true}
litellm-enterprise = {version = "0.1.3", optional = true}
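# Every dependency above marked `optional = true` is skipped on a default
# install; it is only pulled in when a matching [tool.poetry.extras] entry
# is requested at install time.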
[tool.poetry.extras]
proxy = [
    "gunicorn",
    "uvicorn",
    "uvloop",
    "fastapi",
    "backoff",
    "pyyaml",
    "rq",
    "orjson",
    "apscheduler",
    "fastapi-sso",
    "PyJWT",
    "python-multipart",
    "cryptography",
    "pynacl",
    "websockets",
    "boto3",
    "mcp",
    "litellm-proxy-extras",
    "litellm-enterprise",
    "rich",
]
extra_proxy = [
    "prisma",
    "azure-identity",
    "azure-keyvault-secrets",
    "google-cloud-kms",
    "resend",
    "redisvl",
]
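# Example installs for the extras above (assuming a published package):
#   pip install "litellm[proxy]"        # run the proxy server / AI gateway
#   pip install "litellm[extra_proxy]"  # secret-manager, KMS and DB integrations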
[tool.isort]
profile = "black"
[tool.poetry.scripts]
litellm = 'litellm:run_server'
litellm-proxy = 'litellm.proxy.client.cli:cli'
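# These entry points install two console commands:
#   litellm        -> litellm:run_server, starts the proxy server
#   litellm-proxy  -> management CLI for a running proxy (models, keys,
#                     credentials, chat and http command groups)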
[tool.poetry.group.dev.dependencies]
flake8 = "^6.1.0"
black = "^23.12.0"
mypy = "^1.0"
pytest = "^7.4.3"
pytest-mock = "^3.12.0"
pytest-asyncio = "^0.21.1"
requests-mock = "^1.12.1"
responses = "^0.25.7"
respx = "^0.22.0"
ruff = "^0.1.0"
types-requests = "*"
types-setuptools = "*"
types-redis = "*"
types-PyYAML = "*"
opentelemetry-api = "1.25.0"
opentelemetry-sdk = "1.25.0"
opentelemetry-exporter-otlp = "1.25.0"
[tool.poetry.group.proxy-dev.dependencies]
prisma = "0.11.0"
hypercorn = "^0.15.0"
prometheus-client = "0.20.0"
opentelemetry-api = "1.25.0"
opentelemetry-sdk = "1.25.0"
opentelemetry-exporter-otlp = "1.25.0"
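# Both dependency groups above are non-optional, so `poetry install` includes
# them by default; omit them with e.g. `poetry install --without dev,proxy-dev`.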
[build-system]
requires = ["poetry-core", "wheel"]
build-backend = "poetry.core.masonry.api"
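# Build sdist and wheel artifacts locally with `poetry build`.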
[tool.commitizen]
version = "1.69.3"
version_files = [
"pyproject.toml:^version"
]
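# `cz bump` updates every line matching ^version in pyproject.toml, keeping
# the [tool.poetry] and [tool.commitizen] version fields in sync.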
[tool.mypy]
plugins = "pydantic.mypy"
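# The pydantic plugin teaches mypy about code pydantic generates at runtime
# (model __init__ signatures, field types), reducing false positives.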