Closed
Labels: python (Pull requests for the Python Semantic Kernel)
Description
Describe the bug
Essentially, I want a clear path for handling different AI services, so that I can use GPT-4 for semantic function A and GPT-3.5 for semantic function B.
I tried using a single kernel and specifying the service_id in the request settings, but it still uses the original default service.
To Reproduce
Steps to reproduce the behavior:
- Add the services and set the default:
kernel.add_chat_service(
    "azure_openai_chat35_service",
    sk_oai.AzureChatCompletion(
        deployment_name=chat35_api_config.deployment_model_id,
        api_key=chat35_api_config.key,
        endpoint=chat35_api_config.endpoint,
    ),
)
kernel.add_chat_service(
    "azure_openai_chat4_service",
    sk_oai.AzureChatCompletion(
        deployment_name=chat4_api_config.deployment_model_id,
        api_key=chat4_api_config.key,
        endpoint=chat4_api_config.endpoint,
    ),
)
kernel.set_default_chat_service("azure_openai_chat35_service")
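The registration steps can be sketched in plain Python with a hypothetical ServiceRegistry stand-in for the kernel's chat-service map (this is an illustration of the intended semantics, not the real semantic-kernel API):

```python
class ServiceRegistry:
    """Hypothetical stand-in for the kernel's chat-service registry."""

    def __init__(self):
        self._services = {}
        self._default_id = None

    def add_chat_service(self, service_id, service):
        # The first service registered becomes the default unless
        # set_default_chat_service is called later.
        self._services[service_id] = service
        if self._default_id is None:
            self._default_id = service_id

    def set_default_chat_service(self, service_id):
        if service_id not in self._services:
            raise KeyError(f"unknown service: {service_id}")
        self._default_id = service_id

registry = ServiceRegistry()
registry.add_chat_service("azure_openai_chat35_service", "gpt-3.5 client")
registry.add_chat_service("azure_openai_chat4_service", "gpt-4 client")
registry.set_default_chat_service("azure_openai_chat35_service")
```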
- Call the function with settings that specify the other service:
refactor_code = self.get_semantic_function(kernel, "skills", "Generate")
settings = AzureChatRequestSettings(
    service_id="azure_openai_chat4_service", ai_model_id="gpt4-32k"
)
response = await refactor_code.invoke_async(context=context, settings=settings)
- The call is still routed to the GPT-3.5 service
Expected behavior
The azure_openai_chat4_service should be used for the call instead, since it is named explicitly in the request settings.
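The expected resolution order can be sketched as follows (a minimal illustration, assuming the hypothetical resolve_service helper; the real kernel internals may differ):

```python
def resolve_service(services, default_id, settings_service_id=None):
    # Expected behavior: an explicit service_id in the request settings
    # overrides the kernel default. The reported bug is that the default
    # (GPT-3.5) service is used regardless of settings_service_id.
    chosen = settings_service_id or default_id
    return services[chosen]

services = {
    "azure_openai_chat35_service": "gpt-3.5 client",
    "azure_openai_chat4_service": "gpt-4 client",
}

# No service_id in settings: the default applies.
default_pick = resolve_service(services, "azure_openai_chat35_service")

# service_id given in settings: it should win over the default.
explicit_pick = resolve_service(
    services, "azure_openai_chat35_service", "azure_openai_chat4_service"
)
```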
Platform
- OS: Windows
- IDE: VS Code
- Language: Python
- Source: 0.4.6dev