This project contains a step-by-step guide to getting started with Semantic Kernel Agents in Python.
- For the use of Chat Completion agents, the minimum allowed Semantic Kernel pypi version is 1.3.0.
- For the use of OpenAI Assistant agents, the minimum allowed Semantic Kernel pypi version is 1.4.0.
- For the use of Agent Group Chat, the minimum allowed Semantic Kernel pypi version is 1.6.0.
- For the use of Streaming OpenAI Assistant agents, the minimum allowed Semantic Kernel pypi version is 1.11.0.
- For the use of OpenAI Responses agents, the minimum allowed Semantic Kernel pypi version is 1.27.0.
The getting started with agents examples include:
| Example | Description |
|---|---|
| step01_chat_completion_agent_simple | How to create and use a simple chat completion agent. |
| step02_chat_completion_agent_thread_management | How to create and use a chat completion agent with a thread. |
| step03_chat_completion_agent_with_kernel | How to create and use a chat completion agent with the AI service created on the kernel. |
| step04_chat_completion_agent_plugin_simple | How to create a simple chat completion agent and specify plugins via the constructor with a kernel. |
| step05_chat_completion_agent_plugin_with_kernel | How to create and use a chat completion agent by registering plugins on the kernel. |
| step06_chat_completion_agent_group_chat | How to create a conversation between agents. |
| step07_kernel_function_strategies | How to utilize a KernelFunction as a chat strategy. |
| step08_chat_completion_agent_json_result | How to have an agent produce JSON. |
| step09_chat_completion_agent_logging | How to enable logging for agents. |
| step10_chat_completion_agent_structured_outputs | How to have a chat completion agent use structured outputs. |
| step11_chat_completion_agent_declarative | How to create a chat completion agent from a declarative spec. |
The Azure AI Agent examples include:
| Example | Description |
|---|---|
| step1_azure_ai_agent | How to create an Azure AI Agent and invoke a Semantic Kernel plugin. |
| step2_azure_ai_agent_plugin | How to create an Azure AI Agent with plugins. |
| step3_azure_ai_agent_group_chat | How to create an agent group chat with Azure AI Agents. |
| step4_azure_ai_agent_code_interpreter | How to use the code-interpreter tool for an Azure AI agent. |
| step5_azure_ai_agent_file_search | How to use the file-search tool for an Azure AI agent. |
| step6_azure_ai_agent_openapi | How to use the Open API tool for an Azure AI agent. |
| step7_azure_ai_agent_retrieval | How to reference an existing Azure AI Agent. |
| step8_azure_ai_agent_declarative | How to create an Azure AI Agent from a declarative spec. |
Note: For details on configuring an Azure AI Agent, please see here.
The OpenAI Assistant Agent examples include:
| Example | Description |
|---|---|
| step1_assistant | How to create and use an OpenAI Assistant agent. |
| step2_assistant_plugins | How to create and use an OpenAI Assistant agent with plugins. |
| step3_assistant_vision | How to provide an image as input to an OpenAI Assistant agent. |
| step4_assistant_tool_code_interpreter | How to use the code-interpreter tool for an OpenAI Assistant agent. |
| step5_assistant_tool_file_search | How to use the file-search tool for an OpenAI Assistant agent. |
| step6_assistant | How to create an Assistant Agent from a declarative spec. |
The OpenAI Responses Agent examples include:
| Example | Description |
|---|---|
| step1_responses_agent | How to create and use an OpenAI Responses agent in the simplest way. |
| step2_responses_agent_thread_management | How to use a ResponsesAgentThread to maintain conversation context. |
| step3_responses_agent_plugins | How to create and use an OpenAI Responses agent with plugins. |
| step4_responses_agent_web_search | How to use the web search preview tool with an OpenAI Responses agent. |
| step5_responses_agent_file_search | How to use the file-search tool with an OpenAI Responses agent. |
| step6_responses_agent_vision | How to provide an image as input to an OpenAI Responses agent. |
| step7_responses_agent_structured_outputs | How to have an OpenAI Responses agent use structured outputs. |
| step8_assistant | How to create a Responses Agent from a declarative spec. |
The multi-agent orchestration examples include:
| Example | Description |
|---|---|
| step1_concurrent | How to run agents in parallel on the same task. |
| step1a_concurrent_structure_output | How to run agents in parallel on the same task and return structured output. |
| step2_sequential | How to run agents in sequence to complete a task. |
| step2a_sequential_cancellation_token | How to cancel an invocation while it is in progress. |
| step3_group_chat | How to run agents in a group chat to complete a task. |
| step3a_group_chat_human_in_the_loop | How to run agents in a group chat with human in the loop. |
| step3b_group_chat_with_chat_completion_manager | How to run agents in a group chat with a more dynamic manager. |
| step4_handoff | How to run agents in a handoff orchestration to complete a task. |
| step4a_handoff_structure_input | How to run agents in a handoff orchestration to complete a task with structured input. |
| step5_magentic | How to run agents in a Magentic orchestration to complete a task. |
Similar to the Semantic Kernel Python concept samples, it is necessary to configure the secrets and keys used by the kernel. See the "Configuring the Kernel" guide for more information.
Concept samples can be run in an IDE or via the command line. After setting up the required API key for your AI connector, the samples run without any extra command-line arguments.
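Secrets are typically supplied via environment variables or a `.env` file at the project root. A minimal sketch (the variable names follow the usual Semantic Kernel settings; all values are placeholders):

```env
# OpenAI
OPENAI_API_KEY="sk-..."
OPENAI_CHAT_MODEL_ID="gpt-4o"

# Azure OpenAI
AZURE_OPENAI_ENDPOINT="https://my-resource.openai.azure.com/"
AZURE_OPENAI_API_KEY="..."
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="gpt-4o"
```

Only the variables for the AI connector you actually use need to be set.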