Agents - Creating and using agents in Semantic Kernel
- Azure AI Agent as Kernel Function
- Azure AI Agent with Auto Function Invocation Filter Streaming
- Azure AI Agent with Auto Function Invocation Filter
- Azure AI Agent with Azure AI Search
- Azure AI Agent with Bing Grounding Streaming with Message Callback
- Azure AI Agent with Bing Grounding
- Azure AI Agent with Code Interpreter Streaming with Message Callback
- Azure AI Agent Declarative with Azure AI Search
- Azure AI Agent Declarative with Bing Grounding
- Azure AI Agent Declarative with Code Interpreter
- Azure AI Agent Declarative with File Search
- Azure AI Agent Declarative with Function Calling From File
- Azure AI Agent Declarative with OpenAPI Interpreter
- Azure AI Agent Declarative with Existing Agent ID
- Azure AI Agent File Manipulation
- Azure AI Agent MCP Streaming
- Azure AI Agent Prompt Templating
- Azure AI Agent Message Callback Streaming
- Azure AI Agent Message Callback
- Azure AI Agent Retrieve Messages from Thread
- Azure AI Agent Streaming
- Azure AI Agent Structured Outputs
- Azure AI Agent Truncation Strategy
- Bedrock Agent Simple Chat Streaming
- Bedrock Agent Simple Chat
- Bedrock Agent With Code Interpreter Streaming
- Bedrock Agent With Code Interpreter
- Bedrock Agent With Kernel Function Simple
- Bedrock Agent With Kernel Function Streaming
- Bedrock Agent With Kernel Function
- Bedrock Agent Mixed Chat Agents Streaming
- Bedrock Agent Mixed Chat Agents
- Chat Completion Agent as Kernel Function
- Chat Completion Agent Function Termination
- Chat Completion Agent Message Callback Streaming
- Chat Completion Agent Message Callback
- Chat Completion Agent Templating
- Chat Completion Agent Streaming Token Usage
- Chat Completion Agent Summary History Reducer Agent Chat
- Chat Completion Agent Summary History Reducer Single Agent
- Chat Completion Agent Token Usage
- Chat Completion Agent Truncate History Reducer Agent Chat
- Chat Completion Agent Truncate History Reducer Single Agent
- Mixed Chat Agents Plugins
- Mixed Chat Agents
- Mixed Chat Files
- Mixed Chat Images
- Mixed Chat Reset
- Mixed Chat Streaming
- Azure OpenAI Assistant Declarative Code Interpreter
- Azure OpenAI Assistant Declarative File Search
- Azure OpenAI Assistant Declarative Function Calling From File
- Azure OpenAI Assistant Declarative Templating
- Azure OpenAI Assistant Declarative With Existing Agent ID
- OpenAI Assistant Auto Function Invocation Filter Streaming
- OpenAI Assistant Auto Function Invocation Filter
- OpenAI Assistant Chart Maker Streaming
- OpenAI Assistant Chart Maker
- OpenAI Assistant Declarative Code Interpreter
- OpenAI Assistant Declarative File Search
- OpenAI Assistant Declarative Function Calling From File
- OpenAI Assistant Declarative Templating
- OpenAI Assistant Declarative With Existing Agent ID
- OpenAI Assistant File Manipulation Streaming
- OpenAI Assistant File Manipulation
- OpenAI Assistant Retrieval
- OpenAI Assistant Message Callback Streaming
- OpenAI Assistant Message Callback
- OpenAI Assistant Streaming
- OpenAI Assistant Structured Outputs
- OpenAI Assistant Templating Streaming
- OpenAI Assistant Vision Streaming
- Azure OpenAI Responses Agent Declarative File Search
- Azure OpenAI Responses Agent Declarative Function Calling From File
- Azure OpenAI Responses Agent Declarative Templating
- OpenAI Responses Agent Declarative File Search
- OpenAI Responses Agent Declarative Function Calling From File
- OpenAI Responses Agent Declarative Web Search
- OpenAI Responses Binary Content Upload
- OpenAI Responses Message Callback Streaming
- OpenAI Responses Message Callback
- OpenAI Responses File Search Streaming
- OpenAI Responses Plugins Streaming
- OpenAI Responses Reuse Existing Thread ID
- OpenAI Responses Web Search Streaming
Audio - Using services that support audio-to-text and text-to-audio conversion
- Chat with Audio Input
- Chat with Audio Output
- Chat with Audio Input and Output
- Audio Player
- Audio Recorder
AutoFunctionCalling - Using Auto Function Calling to allow function call capable models to invoke Kernel Functions automatically
- Azure Python Code Interpreter Function Calling
- Function Calling with Required Type
- Parallel Function Calling
- Chat Completion with Auto Function Calling Streaming
- Functions Defined in JSON Prompt
- Chat Completion with Manual Function Calling Streaming
- Functions Defined in YAML Prompt
- Chat Completion with Auto Function Calling
- Chat Completion with Manual Function Calling
- Nexus Raven
ChatCompletion - Using ChatCompletion messaging capable service with models
- Simple Chatbot
- Simple Chatbot Kernel Function
- Simple Chatbot Logit Bias
- Simple Chatbot Store Metadata
- Simple Chatbot Streaming
- Simple Chatbot with Image
- Simple Chatbot with Summary History Reducer Keeping Function Content
- Simple Chatbot with Summary History Reducer
- Simple Chatbot with Truncation History Reducer
- Simple Chatbot with Summary History Reducer using Auto Reduce
- Simple Chatbot with Truncation History Reducer using Auto Reduce
ChatHistory - Using and serializing the ChatHistory
Filtering - Creating and using Filters
- Auto Function Invoke Filters
- Function Invocation Filters
- Function Invocation Filters Stream
- Prompt Filters
- Retry with Filters
Local Models - Using the OpenAI connector and OnnxGenAI connector to talk to models hosted locally in Ollama, OnnxGenAI, and LM Studio
- ONNX Chat Completion
- LM Studio Text Embedding
- LM Studio Chat Completion
- ONNX Phi3 Vision Completion
- Ollama Chat Completion
- ONNX Text Completion
Memory - Using Memory AI concepts
- Simple Memory
- Memory Data Models
- Memory with Pandas Dataframes
- Complex memory
- Full sample with Azure AI Search including function calling
Model-as-a-Service - Using models deployed as serverless APIs on Azure AI Studio to benchmark model performance against open-source datasets
On Your Data - Examples of using AzureOpenAI On Your Data
- Azure Chat GPT with Data API
- Azure Chat GPT with Data API Function Calling
- Azure Chat GPT with Data API Vector Search
Plugins - Different ways of creating and using Plugins
- Azure Key Vault Settings
- Azure Python Code Interpreter
- OpenAI Function Calling with Custom Plugin
- Plugins from Directory
Processes - Examples of using the Process Framework
PromptTemplates - Using Templates with parametrization for Prompt rendering
- Template Language
- Azure Chat GPT API Jinja2
- Load YAML Prompt
- Azure Chat GPT API Handlebars
- Configuring Prompts
Reasoning - Using ChatCompletion to reason with OpenAI Reasoning
Search - Using Search services information
Structured Outputs - How to leverage OpenAI's json_schema Structured Outputs functionality
TextGeneration - Using TextGeneration capable service with models
In Semantic Kernel for Python, we leverage Pydantic Settings to manage configurations for AI and Memory Connectors, among other components. Here’s a clear guide on how to configure your settings effectively:

- Reading Environment Variables:
  - Primary Source: Pydantic first attempts to read the required settings from environment variables.
- Using a `.env` File:
  - Fallback Source: If the required environment variables are not set, Pydantic will look for a `.env` file in the current working directory.
  - Custom Path (Optional): You can specify an alternative path for the `.env` file via `env_file_path`. This can be either a relative or an absolute path.
- Direct Constructor Input:
  - As an alternative to environment variables and `.env` files, you can pass the required settings directly through the constructor of the AI Connector or Memory Connector.
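The resolution order described above can be sketched in plain Python. This is an illustrative stand-in only: the actual lookup is performed by Pydantic Settings inside each connector, and `resolve_setting` is a hypothetical helper, not part of the Semantic Kernel API.

```python
import os
from typing import Optional


def resolve_setting(
    name: str,
    constructor_value: Optional[str] = None,
    env_file_path: str = ".env",
) -> Optional[str]:
    """Hypothetical helper mirroring the lookup order described above:
    a value passed to the connector's constructor wins, then the process
    environment, then a `.env` file. Real connectors delegate all of this
    to Pydantic Settings."""
    # 1. Direct constructor input takes precedence.
    if constructor_value is not None:
        return constructor_value
    # 2. Otherwise read the environment variable.
    if name in os.environ:
        return os.environ[name]
    # 3. Finally, fall back to KEY=VALUE lines in a .env file.
    try:
        with open(env_file_path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    if key.strip() == name:
                        return value.strip().strip('"')
    except FileNotFoundError:
        pass
    return None
```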
To authenticate to your Azure resources, provide one of the following authentication methods:

- AsyncTokenCredential - Provide one of the `AsyncTokenCredential` types (e.g. `AzureCliCredential`, `ManagedIdentityCredential`). More information here: Credentials for asynchronous Azure SDK clients.
- Custom AsyncAzureOpenAI client - Pass a pre-configured client instance.
- Access Token (`ad_token`) - Provide a valid Microsoft Entra access token directly.
- Token Provider (`ad_token_provider`) - Provide a callable that returns a valid access token.
- API Key - Provide through an environment variable, a `.env` file, or the constructor.
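For the token-provider option, `ad_token_provider` expects an async callable that returns a bearer token string. In real code you would typically obtain one via `get_bearer_token_provider` from `azure.identity.aio`, scoped to `https://cognitiveservices.azure.com/.default`. The sketch below uses a self-contained stub so it runs without Azure credentials; `make_token_provider` is a hypothetical stand-in, not a library function.

```python
import asyncio
from typing import Awaitable, Callable


def make_token_provider(token: str) -> Callable[[], Awaitable[str]]:
    """Hypothetical stand-in for azure.identity.aio.get_bearer_token_provider:
    builds an async callable that yields a Microsoft Entra bearer token."""

    async def provider() -> str:
        # A real provider would call credential.get_token(...) here and
        # cache the result until it nears expiry.
        return token

    return provider


# A connector constructed with ad_token_provider=provider awaits this
# callable to fetch a token before issuing requests.
provider = make_token_provider("example-entra-token")
token = asyncio.run(provider())
```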
To successfully retrieve and use the Entra auth token, you need the Cognitive Services OpenAI Contributor role assigned for your Azure OpenAI resource. By default, the `https://cognitiveservices.azure.com` token endpoint is used. You can override this endpoint by setting the `AZURE_OPENAI_TOKEN_ENDPOINT` environment variable (or `.env` entry), or by passing a new value to the `AzureChatCompletion` constructor as part of the `AzureOpenAISettings`.
- `.env` File Placement: We highly recommend placing the `.env` file in the `semantic-kernel/python` root directory. This is a common practice when developing in the Semantic Kernel repository.
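As an example, a minimal `.env` file for the Azure OpenAI chat connector might look like the following. The values are placeholders, and the exact variable names each connector reads are defined by its settings class, so treat these keys as an assumption to verify against your connector's documentation.

```env
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="gpt-4o"
AZURE_OPENAI_ENDPOINT="https://my-resource.openai.azure.com/"
AZURE_OPENAI_API_KEY="..."
```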
By following these guidelines, you can ensure that your settings for various components are configured correctly, enabling seamless functionality and integration of Semantic Kernel in your Python projects.