# Semantic Kernel Concepts by Feature

## Table of Contents

- **Agents** - Creating and using agents in Semantic Kernel
- **Audio** - Using services that support audio-to-text and text-to-audio conversion
- **AutoFunctionCalling** - Using Auto Function Calling to allow function-call-capable models to invoke Kernel Functions automatically
- **ChatCompletion** - Using ChatCompletion-capable services with models
- **ChatHistory** - Using and serializing the ChatHistory
- **Filtering** - Creating and using Filters
- **Functions** - Invoking Method or Prompt functions with the Kernel
- **Grounding** - An example of how to perform LLM grounding
- **Local Models** - Using the OpenAI connector and OnnxGenAI connector to talk to models hosted locally in Ollama, OnnxGenAI, and LM Studio
- **Logging** - Showing how to set up logging
- **Memory** - Using Memory AI concepts
- **Model-as-a-Service** - Using models deployed as serverless APIs on Azure AI Studio to benchmark model performance against open-source datasets
- **On Your Data** - Examples of using AzureOpenAI On Your Data
- **Plugins** - Different ways of creating and using Plugins
- **Processes** - Examples of using the Process Framework
- **PromptTemplates** - Using parameterized templates for prompt rendering
- **RAG** - Different ways of doing Retrieval-Augmented Generation (RAG)
- **Reasoning** - Using ChatCompletion with OpenAI reasoning models
- **Search** - Using search services
- **Service Selector** - Shows how to create and use a custom service selector class
- **Setup** - How to set up environment variables for Semantic Kernel
- **Structured Outputs** - How to leverage OpenAI's json_schema Structured Outputs functionality
- **TextGeneration** - Using TextGeneration-capable services with models

## Configuring the Kernel

In Semantic Kernel for Python, we leverage Pydantic Settings to manage configurations for AI and Memory Connectors, among other components. Here’s a clear guide on how to configure your settings effectively:

### Steps for Configuration

1. **Reading environment variables**: Pydantic first attempts to read the required settings from environment variables.
2. **Using a `.env` file**: If the required environment variables are not set, Pydantic falls back to a `.env` file in the current working directory. You can specify an alternative path for the `.env` file via `env_file_path`; this can be either a relative or an absolute path.
3. **Direct constructor input**: As an alternative to environment variables and `.env` files, you can pass the required settings directly through the constructor of the AI Connector or Memory Connector.

## Azure Authentication

To authenticate to your Azure resources, provide one of the following authentication methods:

1. **`AsyncTokenCredential`** - provide one of the `AsyncTokenCredential` types (e.g. `AzureCliCredential`, `ManagedIdentityCredential`). More information here: Credentials for asynchronous Azure SDK clients.
2. **Custom `AsyncAzureOpenAI` client** - pass a pre-configured client instance.
3. **Access token (`ad_token`)** - provide a valid Microsoft Entra access token directly.
4. **Token provider (`ad_token_provider`)** - provide a callable that returns a valid access token.
5. **API key** - provide it through an environment variable, a `.env` file, or the constructor.
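As an illustration of option 4, a token provider is just a zero-argument callable that returns a current access token. The sketch below is hypothetical (in real code you would typically build the provider from an `azure.identity` credential); `fetch_token` here is a stand-in for whatever actually acquires the token:

```python
import time


def make_token_provider(fetch_token):
    """Wrap a token-fetching function in a cache-aware callable.

    `fetch_token` is assumed to return (token_string, expires_on_epoch_seconds).
    The returned provider re-fetches only when the cached token is within
    60 seconds of expiry, which is the usual pattern for ad_token_provider.
    """
    cache = {"token": None, "expires": 0.0}

    def provider() -> str:
        if cache["token"] is None or time.time() >= cache["expires"] - 60:
            token, expires_on = fetch_token()
            cache["token"], cache["expires"] = token, expires_on
        return cache["token"]

    return provider
```

The resulting `provider` can then be passed wherever a token-provider callable is accepted.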

To successfully retrieve and use a Microsoft Entra auth token, you need the Cognitive Services OpenAI Contributor role assigned on your Azure OpenAI resource. By default, the `https://cognitiveservices.azure.com` token endpoint is used. You can override this endpoint by setting the `AZURE_OPENAI_TOKEN_ENDPOINT` environment variable (or `.env` entry), or by passing a new value to the `AzureChatCompletion` constructor as part of the `AzureOpenAISettings`.
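For example, a `.env` file overriding the token endpoint might look like the following (the endpoint and deployment values are placeholders for your own resource; the token endpoint shown is the default noted above):

```env
AZURE_OPENAI_ENDPOINT="https://my-resource.openai.azure.com/"
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="my-deployment"
AZURE_OPENAI_TOKEN_ENDPOINT="https://cognitiveservices.azure.com"
```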

## Best Practices

- **`.env` file placement**: We highly recommend placing the `.env` file in the `semantic-kernel/python` root directory. This is a common practice when developing in the Semantic Kernel repository.

By following these guidelines, you can ensure that your settings for various components are configured correctly, enabling seamless functionality and integration of Semantic Kernel in your Python projects.