Description
I’m using function calling in the following way. The implementation is very simple and works well for me.

```python
from semantic_kernel.connectors.ai.function_call_behavior import FunctionCallBehavior
from semantic_kernel.functions.kernel_arguments import KernelArguments

# `kernel` here is my own wrapper object; get_llm_service() returns the
# chat completion service together with its execution settings.
(service, settings) = kernel.get_llm_service()
settings.tool_choice = "auto"
settings.function_call_behavior = FunctionCallBehavior.AutoInvokeKernelFunctions()
result = await service.get_chat_message_contents(
    chat_history=chat_history,
    settings=settings,
    kernel=kernel.kernel,
    arguments=KernelArguments(settings=settings),
)
print(result[0].inner_content)
```

However, in the following use case this approach makes the generative AI produce a final response at every step, which significantly increases the total processing time:
1. Execute a function call for preprocessing, retrieving specific data in advance (the goal is to gather data from various, unspecified sources).
2. Execute function calls that invoke multiple APIs to audit the retrieved data.
3. Execute the final function call.
This is an extreme example, but the point is that the current approach is sufficient for step 3, where a generated response is wanted. In more complex workflows, however, steps 1 and 2 need no response generation at all; only the external data obtained through the function calls is needed.
In these cases, does the current functionality allow me to execute the function calls in steps 1 and 2 without generating a final response, and then in step 3 execute function calls while also having the generative AI produce the final response?
(Should I use OpenAIChatCompletionBase._process_function_call for this?)
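To make the question concrete, here is an untested sketch of the kind of flow I am imagining. I am assuming that FunctionCallBehavior.EnableFunctions(auto_invoke=False) makes the model return FunctionCallContent items without invoking them, and that split_name(), to_kernel_arguments(), and FunctionResultContent.from_function_call_content_and_result behave the way I read them in the source; service, settings, chat_history, and my kernel wrapper are the same objects as above.

```python
# Untested sketch, based on my reading of the 1.0.x API.
from semantic_kernel.connectors.ai.function_call_behavior import FunctionCallBehavior
from semantic_kernel.contents.function_call_content import FunctionCallContent
from semantic_kernel.contents.function_result_content import FunctionResultContent
from semantic_kernel.functions.kernel_arguments import KernelArguments

# Steps 1 and 2: ask the model for tool calls only, with no auto-invocation.
settings.function_call_behavior = FunctionCallBehavior.EnableFunctions(
    auto_invoke=False, filters={}
)
result = await service.get_chat_message_contents(
    chat_history=chat_history,
    settings=settings,
    kernel=kernel.kernel,
    arguments=KernelArguments(settings=settings),
)

# Run each requested function call through the kernel by hand; any text
# content in the assistant message is simply ignored.
assistant_message = result[0]
chat_history.add_message(assistant_message)
for item in assistant_message.items:
    if isinstance(item, FunctionCallContent):
        plugin_name, function_name = item.split_name()
        function_result = await kernel.kernel.invoke(
            plugin_name=plugin_name,
            function_name=function_name,
            arguments=item.to_kernel_arguments(),
        )
        # Append the tool result so the next round can see it.
        chat_history.add_message(
            FunctionResultContent.from_function_call_content_and_result(
                item, function_result
            ).to_chat_message_content()
        )

# Step 3: switch back to auto-invoke and let the model generate the answer.
settings.function_call_behavior = FunctionCallBehavior.AutoInvokeKernelFunctions()
final = await service.get_chat_message_contents(
    chat_history=chat_history,
    settings=settings,
    kernel=kernel.kernel,
    arguments=KernelArguments(settings=settings),
)
print(final[0].inner_content)
```

Is something along these lines the intended way to do this, or is there a supported mechanism I am missing?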
Environment
- Python 3.10
- semantic-kernel==1.0.3