Python: restructure integration tests (#9331) ### Motivation and Context <!-- Thank you for your contribution to the semantic-kernel repo! Please help reviewers and future users, providing the following information: 1. Why is this change required? 2. What problem does it solve? 3. What scenario does it contribute to? 4. If it fixes an open issue, please link to the issue here. --> Our integration tests for the embedding services and memory are not set up in a way that makes it easy for developers to add test cases. ### Description <!-- Describe your changes, the overall approach, the underlying design. These notes will help understanding how your code works. Thanks! --> 1. Restructure the tests so that it's easier to add new tests when a new service is created. 2. Reduce duplicate code. 3. Add Ollama embedding service tests. > Note: Besides the new Ollama tests, no other tests are added or dropped. ### Contribution Checklist <!-- Before submitting this PR, please make sure: --> - [x] The code builds clean without any errors or warnings - [x] The PR follows the [SK Contribution Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md) and the [pre-submission formatting script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#development-scripts) raises no violations - [x] All unit tests pass, and I have added new tests where possible - [x] I didn't break anyone :smile: --------- Co-authored-by: Evan Mattson <35585003+moonbox3@users.noreply.github.com>
2024-10-23 15:08:44 -07:00
# Copyright (c) Microsoft. All rights reserved.
import asyncio
import logging
import os
import platform
from collections.abc import Awaitable, Callable
from typing import Any
logger = logging.getLogger(__name__)
async def retry(
    func: Callable[..., Awaitable[Any]],
    retries: int = 20,
    reset: Callable[..., None] | None = None,
    name: str | None = None,
):
    """Retry the function if it raises an exception.

    Args:
        func (function): The function to retry.
        retries (int): Number of retries.
        reset (function): Function to reset the state of any variables used in the function.
        name (str): Optional name used in log messages; defaults to the function's module.
    """
    logger.info(f"Running {retries} retries with func: {name or func.__module__}")
    for i in range(retries):
        logger.info(f"  Try {i + 1} for {name or func.__module__}")
        try:
            if reset:
                reset()
            return await func()
        except Exception as e:
            logger.warning(f"  On try {i + 1} got this error: {e}")
            if i == retries - 1:  # Last retry
                raise
            # Binary exponential backoff
            backoff = 2**i
            logger.info(f"  Sleeping for {backoff} seconds before retrying")
            await asyncio.sleep(backoff)
    return None
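As a rough illustration of how this helper behaves, here is a minimal standalone sketch. It inlines a trimmed copy of the retry loop (logging and real backoff removed so the demo runs instantly) against a dummy coroutine that fails twice before succeeding; `flaky` and the attempt counter are invented for the demo.

```python
import asyncio


async def retry(func, retries=20, reset=None, name=None):
    """Trimmed, inlined copy of the retry helper above, for a standalone demo."""
    for i in range(retries):
        try:
            if reset:
                reset()
            return await func()
        except Exception:
            if i == retries - 1:  # Last retry: re-raise instead of swallowing
                raise
            await asyncio.sleep(0)  # demo only; the real helper sleeps 2**i seconds


# A dummy coroutine that raises on the first two calls, then succeeds.
attempts = {"count": 0}


async def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("transient failure")
    return "ok"


result = asyncio.run(retry(flaky, retries=5, name="flaky-demo"))
```

The third attempt succeeds, so `result` is `"ok"` and no exception escapes; with `retries=2` the final `RuntimeError` would propagate instead.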
Python: Raise exceptions when services are not set up in integration test workflow (#9874) ### Motivation and Context <!-- Thank you for your contribution to the semantic-kernel repo! Please help reviewers and future users, providing the following information: 1. Why is this change required? 2. What problem does it solve? 3. What scenario does it contribute to? 4. If it fixes an open issue, please link to the issue here. --> Our integration tests should encompass all test cases unless a case is explicitly marked as skipped or expected to fail (xfail). Currently, our setup permits test cases to be skipped if the required environment variables are not configured. While this approach is beneficial for local testing—since not all developers have access to all necessary resources - it presents challenges in our pipeline. In this environment, changes to environment variables can lead to test cases being skipped without notice, potentially allowing issues to go undetected until a specific connector is reported as problematic. ### Description <!-- Describe your changes, the overall approach, the underlying design. These notes will help understanding how your code works. Thanks! --> The `is_service_setup_for_testing` now also reads the `raise_if_not_set` argument from an environment variable (`INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION`). When `INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION` is not set nor the argument is not passed in, the method will not raise, allowing people to run the integration tests locally. When `INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION` is set or `raise_if_not_set` is explicitly set to true, test collection will fail. `INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION` is set to true in our pipeline environment. 
### Contribution Checklist <!-- Before submitting this PR, please make sure: --> - [x] The code builds clean without any errors or warnings - [x] The PR follows the [SK Contribution Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md) and the [pre-submission formatting script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#development-scripts) raises no violations - [x] All unit tests pass, and I have added new tests where possible - [x] I didn't break anyone :smile:
2024-12-03 13:08:04 -08:00
def is_service_setup_for_testing(env_var_names: list[str], raise_if_not_set: bool | None = None) -> bool:
    """Check if the environment variables are set and not empty.

    Returns True if all the environment variables in the list are set and not empty. Otherwise, returns False.
    This method can also be configured to raise an exception if the environment variables are not set.

    By default, this function does not raise an exception if the environment variables in the list are not set.
    To raise an exception, set the environment variable `INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION` to `true`,
    or set the `raise_if_not_set` parameter to `True`.
    For local testing, not raising an exception is useful to avoid having to set up all services.
    On CI, the environment variables should be set, and the tests should fail if they are not.

    Args:
        env_var_names (list[str]): Environment variable names.
        raise_if_not_set (bool | None): Raise an exception if the environment variables are not set.
    """
    if raise_if_not_set is None:
        raise_if_not_set = os.getenv("INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION", "false").lower() == "true"

    def does_env_var_exist(env_var_name):
        exist = os.getenv(env_var_name, False)
        if not exist and raise_if_not_set:
            raise KeyError(f"Environment variable {env_var_name} is not set.")
        return exist

    return all([does_env_var_exist(name) for name in env_var_names])

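A minimal standalone sketch of the three outcomes (all set, one missing, missing with `raise_if_not_set=True`), using an inlined copy of the helper above. The `DEMO_SERVICE_*` variable names are hypothetical, chosen only for the demo.

```python
import os


def is_service_setup_for_testing(env_var_names, raise_if_not_set=None):
    """Trimmed, inlined copy of the helper above, for a standalone demo."""
    if raise_if_not_set is None:
        raise_if_not_set = os.getenv("INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION", "false").lower() == "true"

    def does_env_var_exist(env_var_name):
        exist = os.getenv(env_var_name, False)
        if not exist and raise_if_not_set:
            raise KeyError(f"Environment variable {env_var_name} is not set.")
        return exist

    return all(does_env_var_exist(name) for name in env_var_names)


# Start from a known state: pipeline flag unset, one var set, one var missing.
os.environ.pop("INTEGRATION_TEST_SERVICE_SETUP_EXCEPTION", None)
os.environ["DEMO_SERVICE_API_KEY"] = "secret"
os.environ.pop("DEMO_SERVICE_ENDPOINT", None)

ready = is_service_setup_for_testing(["DEMO_SERVICE_API_KEY"])  # all vars present
not_ready = is_service_setup_for_testing(["DEMO_SERVICE_API_KEY", "DEMO_SERVICE_ENDPOINT"])

# With raise_if_not_set=True (or the pipeline env var set), a missing variable
# raises instead of skipping, so misconfigured CI fails loudly at collection time.
try:
    is_service_setup_for_testing(["DEMO_SERVICE_ENDPOINT"], raise_if_not_set=True)
    raised = False
except KeyError:
    raised = True
```

This shows the intended split: locally a missing variable just yields `False` (tests skip), while in the pipeline the same condition raises.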
def is_test_running_on_supported_platforms(platforms: list[str]) -> bool:
    """Check if the test is running on supported platforms.

    Args:
        platforms (list[str]): List of supported platforms.
    """
    return platform.system() in platforms
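For completeness, a standalone sketch of the platform gate, again with an inlined copy of the one-line helper. `platform.system()` returns strings like `"Linux"`, `"Darwin"`, or `"Windows"`, so the list below covers the common desktop platforms; an empty list always gates the test off.

```python
import platform


def is_test_running_on_supported_platforms(platforms):
    """Trimmed, inlined copy of the helper above, for a standalone demo."""
    return platform.system() in platforms


on_desktop = is_test_running_on_supported_platforms(["Linux", "Darwin", "Windows"])
on_nothing = is_test_running_on_supported_platforms([])
```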