# Integrate cutting-edge LLM technology quickly and easily into your apps

[tool.poetry]
name = "semantic-kernel"
version = "1.3.0"
description = "Semantic Kernel Python SDK"
authors = ["Microsoft <SK-Support@microsoft.com>"]
readme = "pip/README.md"
packages = [{include = "semantic_kernel"}]

[tool.poetry.dependencies]
python = "^3.10,<3.13"
# main dependencies
aiohttp = "^3.8"
pydantic = "^2"
pydantic-settings = "^2"
defusedxml = "^0.7.1"
# embeddings
numpy = [
{ version = ">=1.25", python = "<3.12" },
{ version = ">=1.26", python = ">=3.12" },
]
# openai connector
openai = ">=1.0"
# openapi and swagger
openapi_core = ">=0.18,<0.20"
# OpenTelemetry
opentelemetry-api = "^1.24.0"
opentelemetry-sdk = "^1.24.0"
prance = "^23.6.21.0"
# templating
pybars4 = "^0.9.13"
jinja2 = "^3.1.3"
nest-asyncio = "^1.6.0"
### Optional dependencies
# azure
azure-ai-inference = {version = "^1.0.0b1", allow-prereleases = true, optional = true}
azure-search-documents = {version = "11.6.0b4", allow-prereleases = true, optional = true}
azure-core = { version = "^1.28.0", optional = true}
azure-identity = { version = "^1.13.0", optional = true}
azure-cosmos = { version = "^4.7.0", optional = true}
# chroma
chromadb = { version = ">=0.4.13,<0.6.0", optional = true}
# google
google-cloud-aiplatform = { version = "^1.60.0", optional = true }
google-generativeai = { version = "^0.7.2", optional = true }
# hugging face
transformers = { version = "^4.28.1", extras = ["torch"], optional = true }
sentence-transformers = { version = "^2.2.2", optional = true }
# mongo
motor = { version = "^3.3.2", optional = true }
# notebooks
ipykernel = { version = "^6.21.1", optional = true }
# milvus
pymilvus = { version = ">=2.3,<2.4.4", optional = true }
milvus = { version = ">=2.3,<2.3.8", markers = 'sys_platform != "win32"', optional = true }
# mistralai
mistralai = { version = "^0.4.1", optional = true }
# ollama
ollama = { version = "^0.2.1", optional = true }
# pinecone
pinecone-client = { version = ">=3.0.0", optional = true }
# postgres
psycopg = { version = "^3.1.9", extras = ["binary", "pool"], optional = true }
# qdrant
qdrant-client = { version = "^1.9", optional = true }
# redis
redis = { version = "^4.6.0", optional = true }
# usearch
usearch = { version = "^2.9", optional = true }
pyarrow = { version = ">=12.0.1,<18.0.0", optional = true }
weaviate-client = { version = ">=3.18,<5.0", optional = true }
ruff = "0.5.2"
[tool.poetry.group.dev.dependencies]
pre-commit = ">=3.7.1"
ruff = ">=0.4.5"
ipykernel = "^6.29.4"
nbconvert = "^7.16.4"
pytest = "^8.2.1"
pytest-xdist = { version = "^3.6.1", extras = ["psutil"] }
pytest-cov = ">=5.0.0"
pytest-asyncio = "^0.23.7"
snoop = "^0.4.3"
mypy = ">=1.10.0"
types-PyYAML = "^6.0.12.20240311"
[tool.poetry.group.unit-tests]
optional = true
[tool.poetry.group.unit-tests.dependencies]
azure-ai-inference = { version = "^1.0.0b1", allow-prereleases = true }
azure-search-documents = { version = "11.6.0b4", allow-prereleases = true }
azure-core = "^1.28.0"
azure-cosmos = "^4.7.0"
mistralai = "^0.4.1"
ollama = "^0.2.1"
google-cloud-aiplatform = "^1.60.0"
google-generativeai = "^0.7.2"
transformers = { version = "^4.28.1", extras = ["torch"] }
sentence-transformers = "^2.2.2"

[tool.poetry.group.tests]
optional = true

[tool.poetry.group.tests.dependencies]
# azure
azure-ai-inference = {version = "^1.0.0b1", allow-prereleases = true}
azure-search-documents = {version = "11.6.0b4", allow-prereleases = true}
azure-core = "^1.28.0"
azure-identity = "^1.13.0"
azure-cosmos = "^4.7.0"
msgraph-sdk = "^1.2.0"
# chroma
chromadb = ">=0.4.13,<0.6.0"
# google
google-cloud-aiplatform = "^1.60.0"
google-generativeai = "^0.7.2"
# hugging face
transformers = { version = "^4.28.1", extras = ["torch"] }
sentence-transformers = "^2.2.2"
# milvus
pymilvus = ">=2.3,<2.4.4"
milvus = { version = ">=2.3,<2.3.8", markers = 'sys_platform != "win32"' }
# mistralai
mistralai = "^0.4.1"
# ollama
ollama = "^0.2.1"
# mongodb
motor = "^3.3.2"
# pinecone
pinecone-client = ">=3.0.0"
# postgres
psycopg = { version = "^3.1.9", extras = ["binary", "pool"] }
# qdrant
qdrant-client = "^1.9"
# redis
redis = "^4.6.0"
# usearch
usearch = "^2.9"
pyarrow = ">=12.0.1,<18.0.0"
# weaviate
weaviate-client = ">=3.18,<5.0"
# Extras are exposed to pip; this allows a user to easily add the right dependencies to their environment
[tool.poetry.extras]
all = ["transformers", "sentence-transformers", "qdrant-client", "chromadb", "pymilvus", "milvus", "mistralai", "ollama", "google", "weaviate-client", "pinecone-client", "psycopg", "redis", "azure-ai-inference", "azure-search-documents", "azure-core", "azure-identity", "azure-cosmos", "usearch", "pyarrow", "ipykernel", "motor"]
azure = ["azure-ai-inference", "azure-search-documents", "azure-core", "azure-identity", "azure-cosmos", "msgraph-sdk"]
chromadb = ["chromadb"]
google = ["google-cloud-aiplatform", "google-generativeai"]
hugging_face = ["transformers", "sentence-transformers"]
milvus = ["pymilvus", "milvus"]
mistralai = ["mistralai"]
ollama = ["ollama"]
mongo = ["motor"]
notebooks = ["ipykernel"]
pinecone = ["pinecone-client"]
postgres = ["psycopg"]
qdrant = ["qdrant-client"]
redis = ["redis"]
usearch = ["usearch", "pyarrow"]
weaviate = ["weaviate-client"]
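# For example, an extra can be pulled in at install time via pip's extras
# syntax (illustrative commands; pick the extras matching your backends):
#   pip install semantic-kernel[redis]
#   pip install "semantic-kernel[azure,qdrant]"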
[tool.pytest.ini_options]
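# Note: the -n and --dist options in addopts come from the pytest-xdist
# plugin; "-n logical" starts one worker per logical CPU. Since --dist is
# passed twice, the last value ("worksteal") is the one that takes effect.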
addopts = "-ra -q -r fEX -n logical --dist loadfile --dist worksteal"
[tool.ruff]
line-length = 120
target-version = "py310"
include = ["*.py", "*.pyi", "**/pyproject.toml", "*.ipynb"]
[tool.ruff.lint]
preview = true
select = ["D", "E", "F", "I", "CPY", "ISC", "INP", "RSE102", "RET", "SIM", "TD", "FIX", "ERA001", "RUF"]
ignore = ["D100", "D101", "D104", "TD003", "FIX002"]
[tool.ruff.lint.pydocstyle]
convention = "google"
[tool.ruff.lint.per-file-ignores]
# Relax the listed rule sets in all directories named `tests` and `samples`.
"tests/**" = ["D", "INP", "TD", "ERA001", "RUF"]
"samples/**" = ["D", "INP", "ERA001", "RUF"]
# Skip docstring rules in all files that end in `_test.py`.
"*_test.py" = ["D"]
"*.ipynb" = ["CPY", "E501"]
[tool.ruff.lint.flake8-copyright]
notice-rgx = "^# Copyright \\(c\\) Microsoft\\. All rights reserved\\."
min-file-size = 1
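# With this CPY configuration, every checked file of at least 1 byte is
# expected to begin with a header matching the regex above, e.g.:
#   # Copyright (c) Microsoft. All rights reserved.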
[tool.bandit]
targets = ["python/semantic_kernel"]
exclude_dirs = ["python/tests"]
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
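# A sketch of typical local usage of this build configuration (assuming
# Poetry is installed; the extras named here are illustrative):
#   poetry install --extras "redis qdrant"
#   poetry build    # builds sdist and wheel via the poetry-core backend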