{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "3c93ac5b",
"metadata": {},
"source": [
"# Running Native Functions\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "40201641",
"metadata": {},
"source": [
"Two of the previous notebooks showed how to [execute semantic functions inline](./03-semantic-function-inline.ipynb) and how to [run prompts from a file](./02-running-prompts-from-file.ipynb).\n",
"\n",
"In this notebook, we'll show how to use native functions from a file. We will also show how to call semantic functions from native functions.\n",
"\n",
"This can be useful in a few scenarios:\n",
"\n",
"- Writing logic around how to run a prompt that changes the prompt's outcome.\n",
"- Using external data sources to gather data to concatenate into your prompt.\n",
"- Validating user input data prior to sending it to the LLM prompt.\n",
"\n",
"Native functions are defined using standard Python code. The structure is simple, but not well documented at this point.\n",
"\n",
"The following examples are intended to help guide new users towards successful native & semantic function use with the SK Python framework.\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "d90b0c13",
"metadata": {},
"source": [
"Prepare a semantic kernel instance first, also loading the AI service settings defined in the [Setup notebook](00-getting-started.ipynb):\n"
]
},
{
"cell_type": "markdown",
"id": "f39125a5",
"metadata": {},
"source": [
"Import the Semantic Kernel SDK from pypi.org."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1da651d4",
"metadata": {},
"outputs": [],
"source": [
"# Note: if using a virtual environment, do not run this cell\n",
"%pip install -U semantic-kernel\n",
"from semantic_kernel import __version__\n",
"\n",
"__version__"
]
},
{
"cell_type": "markdown",
"id": "5f726252",
"metadata": {},
"source": [
"Initial configuration for the notebook to run properly."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ecfe74be",
"metadata": {},
"outputs": [],
"source": [
"# Make sure paths are correct for the imports\n",
"\n",
"import os\n",
"import sys\n",
"\n",
"notebook_dir = os.path.abspath(\"\")\n",
"parent_dir = os.path.dirname(notebook_dir)\n",
"grandparent_dir = os.path.dirname(parent_dir)\n",
"\n",
"\n",
"sys.path.append(grandparent_dir)"
]
},
{
"cell_type": "markdown",
"id": "73a7fd96",
"metadata": {},
"source": [
"### Configuring the Kernel\n",
"\n",
"Let's get started with the necessary configuration to run Semantic Kernel. For Notebooks, we require a `.env` file with the proper settings for the model you use. Create a new file named `.env` and place it in this directory. Copy the contents of the `.env.example` file from this directory and paste it into the `.env` file that you just created.\n",
"\n",
"**NOTE: Please make sure to include `GLOBAL_LLM_SERVICE` set to either OpenAI, AzureOpenAI, or HuggingFace in your .env file. If this setting is not included, the Service will default to AzureOpenAI.**\n",
"\n",
"#### Option 1: using OpenAI\n",
"\n",
"Add your [OpenAI API key](https://openai.com/product/) to your `.env` file (the org ID is needed only if you have multiple orgs):\n",
"\n",
"```\n",
"GLOBAL_LLM_SERVICE=\"OpenAI\"\n",
"OPENAI_API_KEY=\"sk-...\"\n",
"OPENAI_ORG_ID=\"\"\n",
"OPENAI_CHAT_MODEL_ID=\"\"\n",
"OPENAI_TEXT_MODEL_ID=\"\"\n",
"OPENAI_EMBEDDING_MODEL_ID=\"\"\n",
"```\n",
"The variable names in your `.env` file must match those shown above.\n",
"\n",
"#### Option 2: using Azure OpenAI\n",
"\n",
"Add your [Azure Open AI Service key](https://learn.microsoft.com/azure/cognitive-services/openai/quickstart?pivots=programming-language-studio) settings to the `.env` file in the same folder:\n",
"\n",
"```\n",
"GLOBAL_LLM_SERVICE=\"AzureOpenAI\"\n",
"AZURE_OPENAI_API_KEY=\"...\"\n",
"AZURE_OPENAI_ENDPOINT=\"https://...\"\n",
"AZURE_OPENAI_CHAT_DEPLOYMENT_NAME=\"...\"\n",
"AZURE_OPENAI_TEXT_DEPLOYMENT_NAME=\"...\"\n",
"AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME=\"...\"\n",
"AZURE_OPENAI_API_VERSION=\"...\"\n",
"```\n",
"The variable names in your `.env` file must match those shown above.\n",
"\n",
"As an alternative to `AZURE_OPENAI_API_KEY`, you can authenticate using the `credential` parameter; for more information, see [Azure Identity](https://learn.microsoft.com/en-us/python/api/overview/azure/identity-readme).\n",
"\n",
"In the following example, `AzureCliCredential` is used. To authenticate using Azure CLI:\n",
"\n",
"1. Install [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli).\n",
"2. Run `az login` command in terminal and follow the authentication steps.\n",
"\n",
"For more advanced configuration, please follow the steps outlined in the [setup guide](./CONFIGURING_THE_KERNEL.md)."
]
},
{
"cell_type": "markdown",
"id": "9a888bb7",
"metadata": {},
"source": [
"We will load our settings and get the LLM service to use for the notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fddb5403",
"metadata": {},
"outputs": [],
"source": [
"from services import Service\n",
"\n",
"from samples.service_settings import ServiceSettings\n",
"\n",
"service_settings = ServiceSettings()\n",
"\n",
"# Select a service to use for this notebook (available services: OpenAI, AzureOpenAI, HuggingFace)\n",
"selectedService = (\n",
" Service.AzureOpenAI\n",
" if service_settings.global_llm_service is None\n",
" else Service(service_settings.global_llm_service.lower())\n",
")\n",
"print(f\"Using service type: {selectedService}\")"
]
},
{
"cell_type": "markdown",
"id": "fcee3dc1",
"metadata": {},
"source": [
"We now configure our Chat Completion service on the kernel."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dd150646",
"metadata": {},
"outputs": [],
"source": [
"from semantic_kernel import Kernel\n",
"\n",
"kernel = Kernel()\n",
"\n",
"service_id = None\n",
"if selectedService == Service.OpenAI:\n",
" from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion\n",
"\n",
" service_id = \"default\"\n",
" kernel.add_service(\n",
" OpenAIChatCompletion(\n",
" service_id=service_id,\n",
" ),\n",
" )\n",
"elif selectedService == Service.AzureOpenAI:\n",
" from azure.identity import AzureCliCredential\n",
"\n",
" from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion\n",
"\n",
" service_id = \"default\"\n",
" kernel.add_service(\n",
" AzureChatCompletion(service_id=service_id, credential=AzureCliCredential()),\n",
" )"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "186767f8",
"metadata": {},
"source": [
"Let's create a **native** function that gives us a random number between 3 and a user-supplied upper limit. We'll pass this number to a semantic function to control how many paragraphs of text it generates.\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "589733c5",
"metadata": {},
"source": [
"First, let's create our native function.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ae29c207",
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"\n",
"from semantic_kernel.functions import kernel_function\n",
"\n",
"\n",
"class GenerateNumberPlugin:\n",
" \"\"\"\n",
" Description: Generate a number between 3-x.\n",
" \"\"\"\n",
"\n",
" @kernel_function(\n",
" description=\"Generate a random number between 3-x\",\n",
" name=\"GenerateNumberThreeOrHigher\",\n",
" )\n",
" def generate_number_three_or_higher(self, input: str) -> str:\n",
" \"\"\"\n",
" Generate a number between 3-<input>\n",
" Example:\n",
" \"8\" => rand(3,8)\n",
" Args:\n",
" input -- The upper limit for the random number generation\n",
" Returns:\n",
" str value\n",
" \"\"\"\n",
" try:\n",
" return str(random.randint(3, int(input)))\n",
" except ValueError as e:\n",
" print(f\"Invalid input {input}\")\n",
" raise e"
]
},
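{
"cell_type": "markdown",
"id": "a3f61b29",
"metadata": {},
"source": [
"As a quick sanity check, we can call the plugin method directly as plain Python (no kernel needed) and confirm the result stays within bounds. This is just an illustrative sketch; the values vary per run.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b4c72d3a",
"metadata": {},
"outputs": [],
"source": [
"# Call the method directly; each result should fall in [3, 8]\n",
"plugin = GenerateNumberPlugin()\n",
"for _ in range(5):\n",
" value = int(plugin.generate_number_three_or_higher(\"8\"))\n",
" assert 3 <= value <= 8\n",
" print(value)"
]
},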
{
"attachments": {},
"cell_type": "markdown",
"id": "f26b90c4",
"metadata": {},
"source": [
"Next, let's create a semantic function that accepts a number as `{{$input}}` and generates that number of paragraphs about two Corgis on an adventure. `$input` is a default variable semantic functions can use.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7890943f",
"metadata": {},
"outputs": [],
"source": [
"from semantic_kernel.connectors.ai.open_ai import AzureChatPromptExecutionSettings, OpenAIChatPromptExecutionSettings\n",
"from semantic_kernel.prompt_template import InputVariable, PromptTemplateConfig\n",
"\n",
"prompt = \"\"\"\n",
"Write a short story about two Corgis on an adventure.\n",
"The story must be:\n",
"- G rated\n",
"- Have a positive message\n",
"- No sexism, racism or other bias/bigotry\n",
"- Be exactly {{$input}} paragraphs long. It must be this length.\n",
"\"\"\"\n",
"\n",
"if selectedService == Service.OpenAI:\n",
" execution_settings = OpenAIChatPromptExecutionSettings(\n",
" service_id=service_id,\n",
" ai_model_id=\"gpt-3.5-turbo\",\n",
" max_tokens=2000,\n",
" temperature=0.7,\n",
" )\n",
"elif selectedService == Service.AzureOpenAI:\n",
" execution_settings = AzureChatPromptExecutionSettings(\n",
" service_id=service_id,\n",
" ai_model_id=\"gpt-35-turbo\",\n",
" max_tokens=2000,\n",
" temperature=0.7,\n",
" )\n",
"\n",
"prompt_template_config = PromptTemplateConfig(\n",
" template=prompt,\n",
" name=\"story\",\n",
" template_format=\"semantic-kernel\",\n",
" input_variables=[\n",
" InputVariable(name=\"input\", description=\"The user input\", is_required=True),\n",
" ],\n",
" execution_settings=execution_settings,\n",
")\n",
"\n",
"corgi_story = kernel.add_function(\n",
" function_name=\"CorgiStory\",\n",
" plugin_name=\"CorgiPlugin\",\n",
" prompt_template_config=prompt_template_config,\n",
")\n",
"\n",
"generate_number_plugin = kernel.add_plugin(GenerateNumberPlugin(), \"GenerateNumberPlugin\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2471c2ab",
"metadata": {},
"outputs": [],
"source": [
"# Run the number generator\n",
"generate_number_three_or_higher = generate_number_plugin[\"GenerateNumberThreeOrHigher\"]\n",
"number_result = await generate_number_three_or_higher(kernel, input=6)\n",
"print(number_result)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f043a299",
"metadata": {},
"outputs": [],
"source": [
"story = await corgi_story.invoke(kernel, input=number_result.value)"
]
},
{
"cell_type": "markdown",
"id": "7245e7a2",
"metadata": {},
"source": [
"_Note: depending on which model you're using, it may not respond with the proper number of paragraphs._\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "59a60e2a",
"metadata": {},
"outputs": [],
"source": [
"print(f\"Generating a corgi story exactly {number_result.value} paragraphs long.\")\n",
"print(\"=====================================================\")\n",
"print(story)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "8ef29d16",
"metadata": {},
"source": [
"## Kernel Functions with Annotated Parameters\n",
"\n",
"That works! But let's expand on our example to make it more generic.\n",
"\n",
"For the native function, we'll introduce the lower limit variable. This means that a user will input two numbers and the number generator function will pick a number between the first and second input.\n",
"\n",
"We'll make use of Python's `Annotated` class to hold these variables.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d54983d8",
"metadata": {},
"outputs": [],
"source": [
"kernel.remove_all_services()\n",
"\n",
"service_id = None\n",
"if selectedService == Service.OpenAI:\n",
" from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion\n",
"\n",
" service_id = \"default\"\n",
" kernel.add_service(\n",
" OpenAIChatCompletion(\n",
" service_id=service_id,\n",
" ),\n",
" )\n",
"elif selectedService == Service.AzureOpenAI:\n",
" from azure.identity import AzureCliCredential\n",
"\n",
" from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion\n",
"\n",
" service_id = \"default\"\n",
" kernel.add_service(\n",
" AzureChatCompletion(service_id=service_id, credential=AzureCliCredential()),\n",
" )"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "091f45e4",
"metadata": {},
"source": [
"Let's start with the native function. Notice that we add the `@kernel_function` decorator, which holds the name of the function as well as an optional description. The input parameters are configured as part of the function's signature, and we use the `Annotated` type to specify the required input arguments.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4ea462c2",
"metadata": {},
"outputs": [],
"source": [
"from typing import Annotated\n",
"\n",
"from semantic_kernel.functions import kernel_function\n",
"\n",
"\n",
"class GenerateNumberPlugin:\n",
" \"\"\"\n",
" Description: Generate a number between a min and a max.\n",
" \"\"\"\n",
"\n",
" @kernel_function(\n",
" name=\"GenerateNumber\",\n",
" description=\"Generate a random number between min and max\",\n",
" )\n",
" def generate_number(\n",
" self,\n",
" min: Annotated[int, \"the minimum number of paragraphs\"],\n",
" max: Annotated[int, \"the maximum number of paragraphs\"] = 10,\n",
" ) -> Annotated[int, \"the output is a number\"]:\n",
" \"\"\"\n",
" Generate a number between min-max\n",
" Example:\n",
" min=\"4\" max=\"10\" => rand(4,10)\n",
" Args:\n",
" min -- The lower limit for the random number generation\n",
" max -- The upper limit for the random number generation\n",
" Returns:\n",
" int value\n",
" \"\"\"\n",
" try:\n",
" return random.randint(min, max)\n",
" except ValueError as e:\n",
" print(f\"Invalid input {min} and {max}\")\n",
" raise e"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "48bcdf9e",
"metadata": {},
"outputs": [],
"source": [
"generate_number_plugin = kernel.add_plugin(GenerateNumberPlugin(), \"GenerateNumberPlugin\")\n",
"generate_number = generate_number_plugin[\"GenerateNumber\"]"
]
},
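{
"cell_type": "markdown",
"id": "c5d83e4b",
"metadata": {},
"source": [
"Before wiring it into a prompt, we can exercise the new signature directly; note that `max` defaults to 10 when omitted. This is just a quick sketch to illustrate the parameters.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d6e94f5c",
"metadata": {},
"outputs": [],
"source": [
"plugin = GenerateNumberPlugin()\n",
"print(plugin.generate_number(min=4)) # uses the default max=10\n",
"print(plugin.generate_number(min=2, max=3))"
]
},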
{
"attachments": {},
"cell_type": "markdown",
"id": "6ad068d6",
"metadata": {},
"source": [
"Now let's also allow the semantic function to take in additional arguments. In this case, we'll allow our CorgiStory function to be written in a specified language. We'll need to provide a `paragraph_count` and a `language`.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b8286fb",
"metadata": {},
"outputs": [],
"source": [
"prompt = \"\"\"\n",
"Write a short story about two Corgis on an adventure.\n",
"The story must be:\n",
"- G rated\n",
"- Have a positive message\n",
"- No sexism, racism or other bias/bigotry\n",
"- Be exactly {{$paragraph_count}} paragraphs long\n",
"- Be written in this language: {{$language}}\n",
"\"\"\"\n",
"\n",
"if selectedService == Service.OpenAI:\n",
" execution_settings = OpenAIChatPromptExecutionSettings(\n",
" service_id=service_id,\n",
" ai_model_id=\"gpt-3.5-turbo\",\n",
" max_tokens=2000,\n",
" temperature=0.7,\n",
" )\n",
"elif selectedService == Service.AzureOpenAI:\n",
" execution_settings = AzureChatPromptExecutionSettings(\n",
" service_id=service_id,\n",
" ai_model_id=\"gpt-35-turbo\",\n",
" max_tokens=2000,\n",
" temperature=0.7,\n",
" )\n",
"\n",
"prompt_template_config = PromptTemplateConfig(\n",
" template=prompt,\n",
" name=\"summarize\",\n",
" template_format=\"semantic-kernel\",\n",
" input_variables=[\n",
" InputVariable(name=\"paragraph_count\", description=\"The number of paragraphs\", is_required=True),\n",
" InputVariable(name=\"language\", description=\"The language of the story\", is_required=True),\n",
" ],\n",
" execution_settings=execution_settings,\n",
")\n",
"\n",
"corgi_story = kernel.add_function(\n",
" function_name=\"CorgiStory\",\n",
" plugin_name=\"CorgiPlugin\",\n",
" prompt_template_config=prompt_template_config,\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "c8778bad",
"metadata": {},
"source": [
"Let's generate a paragraph count.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "28820d9d",
"metadata": {},
"outputs": [],
"source": [
"result = await generate_number.invoke(kernel, min=1, max=5)\n",
"num_paragraphs = result.value\n",
"print(f\"Generating a corgi story {num_paragraphs} paragraphs long.\")"
]
},
{
"cell_type": "markdown",
"id": "225a9147",
"metadata": {},
"source": [
"We can now invoke our corgi_story function using the `kernel` and the keyword arguments `paragraph_count` and `language`.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dbe07c4d",
"metadata": {},
"outputs": [],
"source": [
"# Pass the output to the semantic story function\n",
"desired_language = \"Spanish\"\n",
"story = await corgi_story.invoke(kernel, paragraph_count=num_paragraphs, language=desired_language)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6732a30b",
"metadata": {},
"outputs": [],
"source": [
"print(f\"Generating a corgi story {num_paragraphs} paragraphs long in {desired_language}.\")\n",
"print(\"=====================================================\")\n",
"print(story)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "fb786c54",
"metadata": {},
"source": [
"## Calling Native Functions within a Semantic Function\n",
"\n",
"One neat thing about the Semantic Kernel is that you can also call native functions from within Prompt Functions!\n",
"\n",
"We will make our CorgiStory semantic function call a native function `GenerateNames` which will return names for our Corgi characters.\n",
"\n",
"We do this using the syntax `{{plugin_name.function_name}}`. You can read more about our prompt templating syntax [here](../../../docs/PROMPT_TEMPLATE_LANGUAGE.md).\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d84c7d84",
"metadata": {},
"outputs": [],
"source": [
"from semantic_kernel.functions import kernel_function\n",
"\n",
"\n",
"class GenerateNamesPlugin:\n",
" \"\"\"\n",
" Description: Generate character names.\n",
" \"\"\"\n",
"\n",
" # The default function name will be the name of the function itself, however you can override this\n",
" # by setting the name=<name override> in the @kernel_function decorator. In this case, we're using\n",
" # the same name as the function name for simplicity.\n",
" @kernel_function(description=\"Generate character names\", name=\"generate_names\")\n",
" def generate_names(self) -> str:\n",
" \"\"\"\n",
" Generate two names.\n",
" Returns:\n",
" str\n",
" \"\"\"\n",
" names = {\"Hoagie\", \"Hamilton\", \"Bacon\", \"Pizza\", \"Boots\", \"Shorts\", \"Tuna\"}\n",
" first_name = random.choice(list(names))\n",
" names.remove(first_name)\n",
" second_name = random.choice(list(names))\n",
" return f\"{first_name}, {second_name}\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2ab7d65f",
"metadata": {},
"outputs": [],
"source": [
"generate_names_plugin = kernel.add_plugin(GenerateNamesPlugin(), plugin_name=\"GenerateNames\")\n",
"generate_names = generate_names_plugin[\"generate_names\"]"
]
},
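{
"cell_type": "markdown",
"id": "e7fa5a6d",
"metadata": {},
"source": [
"We can invoke the native function once on its own to preview the comma-separated names it will inject into the prompt (the pair varies per run):\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f8ab6b7e",
"metadata": {},
"outputs": [],
"source": [
"names_result = await generate_names(kernel)\n",
"print(names_result)"
]
},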
{
"cell_type": "code",
"execution_count": null,
"id": "94decd3e",
"metadata": {},
"outputs": [],
"source": [
"prompt = \"\"\"\n",
"Write a short story about two Corgis on an adventure.\n",
"The story must be:\n",
"- G rated\n",
"- Have a positive message\n",
"- No sexism, racism or other bias/bigotry\n",
"- Be exactly {{$paragraph_count}} paragraphs long\n",
"- Be written in this language: {{$language}}\n",
"- The two names of the corgis are {{GenerateNames.generate_names}}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "be72a503",
"metadata": {},
"outputs": [],
"source": [
"if selectedService == Service.OpenAI:\n",
" execution_settings = OpenAIChatPromptExecutionSettings(\n",
" service_id=service_id,\n",
" ai_model_id=\"gpt-3.5-turbo\",\n",
" max_tokens=2000,\n",
" temperature=0.7,\n",
" )\n",
"elif selectedService == Service.AzureOpenAI:\n",
" execution_settings = AzureChatPromptExecutionSettings(\n",
" service_id=service_id,\n",
" ai_model_id=\"gpt-35-turbo\",\n",
" max_tokens=2000,\n",
" temperature=0.7,\n",
" )\n",
"\n",
"prompt_template_config = PromptTemplateConfig(\n",
" template=prompt,\n",
" name=\"corgi-new\",\n",
" template_format=\"semantic-kernel\",\n",
" input_variables=[\n",
" InputVariable(name=\"paragraph_count\", description=\"The number of paragraphs\", is_required=True),\n",
" InputVariable(name=\"language\", description=\"The language of the story\", is_required=True),\n",
" ],\n",
" execution_settings=execution_settings,\n",
")\n",
"\n",
"corgi_story = kernel.add_function(\n",
" function_name=\"CorgiStoryUpdated\",\n",
" plugin_name=\"CorgiPluginUpdated\",\n",
" prompt_template_config=prompt_template_config,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "56e6cf0f",
"metadata": {},
"outputs": [],
"source": [
"result = await generate_number.invoke(kernel, min=1, max=5)\n",
"num_paragraphs = result.value"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7e980348",
"metadata": {},
"outputs": [],
"source": [
"desired_language = \"French\"\n",
"story = await corgi_story.invoke(kernel, paragraph_count=num_paragraphs, language=desired_language)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4ade048",
"metadata": {},
"outputs": [],
"source": [
"print(f\"Generating a corgi story {num_paragraphs} paragraphs long in {desired_language}.\")\n",
"print(\"=====================================================\")\n",
"print(story)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "42f0c472",
"metadata": {},
"source": [
"### Recap\n",
"\n",
"A quick review of what we've learned here:\n",
"\n",
"- We've learned how to create native and prompt functions and register them with the kernel\n",
"- We've seen how to use Kernel Arguments to pass custom variables into our prompt\n",
"- We've seen how to call native functions within a prompt\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}