Below you can find code snippets that demonstrate the usage of many Semantic Kernel features. You can run these samples using your IDE or the command line. To run a test from the command line, run the following command from the root of the Concepts project:
```
dotnet test -l "console;verbosity=detailed" --filter "FullyQualifiedName=NameSpace.TestClass.TestMethod"
```
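Besides an exact match (`=`), the `--filter` option of `dotnet test` also accepts a contains match (`~`), which is convenient for running every test in a sample class rather than a single method. A sketch, using the `ChatCompletion.OpenAI_ChatCompletion` class from the example below (this requires the .NET SDK and must be run from the Concepts project root):

```shell
# Run all tests in the OpenAI_ChatCompletion sample class; "~" means "contains"
dotnet test -l "console;verbosity=detailed" --filter "FullyQualifiedName~ChatCompletion.OpenAI_ChatCompletion"
```
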
For example, for the `ChatCompletion/OpenAI_ChatCompletion.cs` file, targeting the `ChatPromptAsync` test:

```
dotnet test -l "console;verbosity=detailed" --filter "FullyQualifiedName=ChatCompletion.OpenAI_ChatCompletion.ChatPromptAsync"
```

## Agents - Different ways of using Agents
- ComplexChat_NestedShopper
- MixedChat_Agents
- OpenAIAssistant_ChartMaker
- ChatCompletion_Rag: Shows how to easily add RAG to an agent
- ChatCompletion_Mem0: Shows how to add memory to an agent using mem0
- ChatCompletion_Whiteboard: Shows how to add short-term whiteboarding memory to an agent
- ChatCompletion_ContextualFunctionSelection: Shows how to add contextual function selection capabilities to an agent
## AudioToText - Different ways of using AudioToText services to extract text from audio

## FunctionCalling - Examples of using function calling with models

- FunctionCalling
- FunctionCalling_ReturnMetadata
- Gemini_FunctionCalling
- AzureAIInference_FunctionCalling
- NexusRaven_HuggingFaceTextGeneration
- MultipleFunctionsVsParameters
- FunctionCalling_SharedState
## ChatCompletion - Examples using ChatCompletion messaging-capable services with models
- AzureAIInference_ChatCompletion
- AzureAIInference_ChatCompletionStreaming
- AzureOpenAI_ChatCompletion
- AzureOpenAI_ChatCompletionWithReasoning
- AzureOpenAI_ChatCompletionStreaming
- AzureOpenAI_CustomClient
- AzureOpenAIWithData_ChatCompletion
- ChatHistoryAuthorName
- ChatHistoryInFunctions
- ChatHistorySerialization
- Connectors_CustomHttpClient
- Connectors_KernelStreaming
- Connectors_WithMultipleLLMs
- Google_GeminiChatCompletion
- Google_GeminiChatCompletionStreaming
- Google_GeminiChatCompletionWithThinkingBudget
- Google_GeminiChatCompletionWithFile
- Google_GeminiGetModelResult
- Google_GeminiStructuredOutputs
- Google_GeminiVision
- HuggingFace_ChatCompletion
- HuggingFace_ChatCompletionStreaming
- HybridCompletion_Fallback
- LMStudio_ChatCompletion
- LMStudio_ChatCompletionStreaming
- MistralAI_ChatCompletion
- MistralAI_ChatPrompt
- MistralAI_FunctionCalling
- MistralAI_StreamingFunctionCalling
- MultipleProviders_ChatHistoryReducer
- Ollama_ChatCompletion
- Ollama_ChatCompletionStreaming
- Ollama_ChatCompletionWithVision
- Onnx_ChatCompletion
- Onnx_ChatCompletionStreaming
- OpenAI_ChatCompletion
- OpenAI_ChatCompletionStreaming
- OpenAI_ChatCompletionWebSearch
- OpenAI_ChatCompletionWithAudio
- OpenAI_ChatCompletionWithFile
- OpenAI_ChatCompletionWithReasoning
- OpenAI_ChatCompletionWithVision
- OpenAI_CustomClient
- OpenAI_FunctionCalling
- OpenAI_FunctionCallingWithMemoryPlugin
- OpenAI_ReasonedFunctionCalling
- OpenAI_RepeatedFunctionCalling
- OpenAI_StructuredOutputs
- OpenAI_UsingLogitBias
## Filtering - Different ways of filtering

- AutoFunctionInvocationFiltering
- FunctionInvocationFiltering
- MaxTokensWithFilters
- PIIDetection
- PromptRenderFiltering
- RetryWithFilters
- TelemetryWithFilters
- AzureOpenAI_DeploymentSwitch
## Functions - Invoking Method or Prompt functions

- Arguments
- FunctionResult_Metadata
- FunctionResult_StronglyTyped
- MethodFunctions
- MethodFunctions_Advanced
- MethodFunctions_Types
- MethodFunctions_Yaml
- PromptFunctions_Inline
- PromptFunctions_MultipleArguments
## ImageToText - Using ImageToText services to describe images

## Memory - Using AI Memory concepts
- AWSBedrock_EmbeddingGeneration
- OpenAI_EmbeddingGeneration
- Ollama_EmbeddingGeneration
- Onnx_EmbeddingGeneration
- HuggingFace_EmbeddingGeneration
- TextChunkerUsage
- TextChunkingAndEmbedding
- VectorStore_DataIngestion_Simple: A simple example of how to do data ingestion into a vector store when getting started.
- VectorStore_DataIngestion_MultiStore: An example of data ingestion that uses the same code to ingest into multiple vector store types.
- VectorStore_DataIngestion_CustomMapper: An example that shows how to use a custom mapper when your data model and storage model don't match.
- VectorStore_VectorSearch_Simple: A simple example of how to do data ingestion into a vector store and then do a vector similarity search over the data.
- VectorStore_VectorSearch_Paging: An example showing how to do vector search with paging.
- VectorStore_VectorSearch_MultiVector: An example showing how to pick a target vector when doing vector search on a record that contains multiple vectors.
- VectorStore_VectorSearch_MultiStore_Common: An example showing how to write vector database agnostic code with different vector databases.
- VectorStore_HybridSearch_Simple_AzureAISearch: An example showing how to do hybrid search using AzureAISearch.
- VectorStore_DynamicDataModel_Interop: An example that shows how you can use dynamic data modeling from Semantic Kernel to read and write to a Vector Store.
- VectorStore_ConsumeFromMemoryStore_AzureAISearch: An example that shows how you can use the AzureAISearchVectorStore to consume data that was ingested using the AzureAISearchMemoryStore.
- VectorStore_ConsumeFromMemoryStore_Qdrant: An example that shows how you can use the QdrantVectorStore to consume data that was ingested using the QdrantMemoryStore.
- VectorStore_ConsumeFromMemoryStore_Redis: An example that shows how you can use the RedisVectorStore to consume data that was ingested using the RedisMemoryStore.
- VectorStore_Langchain_Interop: An example that shows how you can use various Vector Stores to consume data that was ingested using Langchain.
## Plugins - Different ways of creating and using Plugins
- ApiManifestBasedPlugins
- ConversationSummaryPlugin
- CreatePluginFromOpenApiSpec_Github
- CreatePluginFromOpenApiSpec_Jira
- CreatePluginFromOpenApiSpec_Klarna
- CreatePluginFromOpenApiSpec_RepairService
- CreatePromptPluginFromDirectory
- CrewAI_Plugin
- OpenApiPlugin_PayloadHandling
- OpenApiPlugin_CustomHttpContentReader
- OpenApiPlugin_Customization
- OpenApiPlugin_Filtering
- OpenApiPlugin_Telemetry
- OpenApiPlugin_RestApiOperationResponseFactory
- CustomMutablePlugin
- DescribeAllPluginsAndFunctions
- GroundednessChecks
- ImportPluginFromGrpc
- MsGraph_CalendarPlugin
- MsGraph_EmailPlugin
- MsGraph_ContactsPlugin
- MsGraph_DrivePlugin
- MsGraph_TasksPlugin
- TransformPlugin
- CopilotAgentBasedPlugins
- WebPlugins
## PromptTemplates - Using templates with parametrization for prompt rendering
- ChatCompletionPrompts
- ChatLoopWithPrompt
- ChatPromptWithAudio
- ChatPromptWithBinary
- ChatWithPrompts
- HandlebarsPrompts
- HandlebarsVisionPrompts
- LiquidPrompts
- MultiplePromptTemplates
- PromptFunctionsWithChatGPT
- PromptyFunction
- SafeChatPrompts
- TemplateLanguage
## TextGeneration - Using TextGeneration-capable services with models

## TextToAudio - Using TextToAudio services to generate audio

## TextToImage - Using TextToImage services to generate images
## Configuring Secrets

The Concept samples require secrets and credentials to access OpenAI, Azure OpenAI, Bing, and other resources. We suggest using .NET Secret Manager to avoid the risk of leaking secrets into the repository, branches, and pull requests. You can also use environment variables if you prefer.

To set your secrets with Secret Manager:
```
cd dotnet/src/samples/Concepts

dotnet user-secrets init

dotnet user-secrets set "OpenAI:ServiceId" "gpt-3.5-turbo-instruct"
dotnet user-secrets set "OpenAI:ModelId" "gpt-3.5-turbo-instruct"
dotnet user-secrets set "OpenAI:ChatModelId" "gpt-4"
dotnet user-secrets set "OpenAI:ApiKey" "..."
...
```
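Secret Manager stores these values in your user profile, outside the repository. If you want to verify what has been set for the project, you can list the stored secrets (this requires the .NET SDK and must be run from the Concepts project directory):

```shell
# Prints every secret key/value pair configured for the current project
dotnet user-secrets list
```
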
To set your secrets with `appsettings.Development.json`:

- Create an `appsettings.Development.json` file next to the `Concepts.csproj` file. This file is ignored by git, so its content will not end up in pull requests, making it safe for personal settings. Keep the file safe.
- Edit `appsettings.Development.json` and set the appropriate configuration for the samples you are running.
For example:
```json
{
  "OpenAI": {
    "ServiceId": "gpt-3.5-turbo-instruct",
    "ModelId": "gpt-3.5-turbo-instruct",
    "ChatModelId": "gpt-4",
    "ApiKey": "sk-...."
  },
  "AzureOpenAI": {
    "ServiceId": "azure-gpt-35-turbo-instruct",
    "DeploymentName": "gpt-35-turbo-instruct",
    "ChatDeploymentName": "gpt-4",
    "Endpoint": "https://contoso.openai.azure.com/",
    "ApiKey": "...."
  }
  // etc.
}
```

You may also set the settings in your environment variables. The environment variables will override the settings in the `appsettings.Development.json` file.
When setting environment variables, use a double underscore (i.e. "__") to delineate between parent and child properties. For example:
- bash:

  ```bash
  export OpenAI__ApiKey="sk-...."
  export AzureOpenAI__ApiKey="...."
  export AzureOpenAI__DeploymentName="gpt-35-turbo-instruct"
  export AzureOpenAI__ChatDeploymentName="gpt-4"
  export AzureOpenAIEmbeddings__DeploymentName="azure-text-embedding-ada-002"
  export AzureOpenAI__Endpoint="https://contoso.openai.azure.com/"
  export HuggingFace__ApiKey="...."
  export Bing__ApiKey="...."
  export Postgres__ConnectionString="...."
  ```

- PowerShell:

  ```powershell
  $env:OpenAI__ApiKey = "sk-...."
  $env:AzureOpenAI__ApiKey = "...."
  $env:AzureOpenAI__DeploymentName = "gpt-35-turbo-instruct"
  $env:AzureOpenAI__ChatDeploymentName = "gpt-4"
  $env:AzureOpenAIEmbeddings__DeploymentName = "azure-text-embedding-ada-002"
  $env:AzureOpenAI__Endpoint = "https://contoso.openai.azure.com/"
  $env:HuggingFace__ApiKey = "...."
  $env:Bing__ApiKey = "...."
  $env:Postgres__ConnectionString = "...."
  ```