issue: Duplicate instructions in tool selection calling prompt #19121

@matiboux

Description


Check Existing Issues

  • I have searched for any existing and/or related issues.
  • I have searched for any existing and/or related discussions.
  • I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
  • I am using the latest version of Open WebUI.

Installation Method

Git Clone

Open WebUI Version

v0.6.36

Ollama Version (if applicable)

No response

Operating System

Ubuntu 24.04 LTS (WSL2)

Browser (if applicable)

No response

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have provided every relevant configuration, setting, and environment variable used in my setup.
  • I have clearly listed every relevant configuration, custom setting, environment variable, and command-line option that influences my setup (such as Docker Compose overrides, .env values, browser settings, authentication configurations, etc).
  • I have documented step-by-step reproduction instructions that are precise, sequential, and leave nothing to interpretation. My steps:
  • Start with the initial platform/version/OS and dependencies used,
  • Specify exact install/launch/configure commands,
  • List URLs visited, user input (incl. example values/emails/passwords if needed),
  • Describe all options and toggles enabled or changed,
  • Include any files or environmental changes,
  • Identify the expected and actual result at each stage,
  • Ensure any reasonably skilled user can follow and hit the same issue.

Expected Behavior

When a tool is selected, the prompt sent to the model for tool selection and calling should be clean and not contain duplicate or unclear instructions.

Actual Behavior

When a tool is selected, the prompt sent to the model for tool selection contains duplicate and unclear instructions in the user message:

```
Query: History:
USER: """what time is it?"""
Query: what time is it?
```

The main problem is that the "Query:" prefix appears twice: once in "Query: History:" and again before the final query. This comes from how the user message is formatted in the tool selection request:

```python
prompt = f"History:\n{chat_history}\nQuery: {user_message}"
user_message_content = f"Query: {prompt}"
```

Also, the query and the last user message of the conversation are duplicated. Whether this duplication is a problem is open for discussion.
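The duplication can be reproduced in isolation. Variable names are taken from the snippets above; the example inputs are illustrative:

```python
# Reproduce the prompt construction from the tool selection code path.
chat_history = 'USER: """what time is it?"""'
user_message = "what time is it?"

# The first format pass already embeds the query once...
prompt = f"History:\n{chat_history}\nQuery: {user_message}"
# ...and the second pass prepends another "Query:" prefix.
user_message_content = f"Query: {prompt}"

print(user_message_content)
# Query: History:
# USER: """what time is it?"""
# Query: what time is it?
```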

Steps to Reproduce

  1. Start Ubuntu 24.04 LTS in WSL2.
  2. Set up Open WebUI per documentation:
    • In one terminal: npm run dev to start the frontend.
    • In another terminal: run sh dev.sh from the backend/ directory to start the backend.
  3. In parallel, start a custom API that proxies OpenAI-compatible requests from the /chat/completions endpoint to a real OpenAI-compatible API, and whose only other role is to print out:
    • All request parameters
    • System and user messages
  4. In the Open WebUI interface, create the admin user.
  5. Go to admin settings and enter a connection setting pointing to the custom OpenAI-compatible proxy API.
  6. In Workspace, create a new tool using the default provided code.
  7. Create a new chat with any model and enable the tool just created.
  8. Type (or ask): "what time is it?"
  9. Inspect the printout in the custom API terminal window: observe that both system/context and prompt are duplicated in the API request.

Note: Alternatively, you could add logs in the backend to print the request parameters, but using a custom API ensures backend code is untouched for this test.
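The inspection side of the proxy can be sketched as a small helper that prints each message in an OpenAI-style `/chat/completions` request body. The function name and example payload are illustrative, not part of Open WebUI:

```python
def log_chat_completion_request(payload: dict) -> list[str]:
    """Print the role and content of each message in an
    OpenAI-style /chat/completions request body, and return
    the printed lines for inspection."""
    lines = []
    for message in payload.get("messages", []):
        lines.append(f"- {message.get('role')}")
        lines.append(str(message.get("content", "")))
    print("\n".join(lines))
    return lines

# Example payload shaped like the tool selection request observed below:
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "Available Tools: [...]"},
        {
            "role": "user",
            "content": 'Query: History:\nUSER: """what time is it?"""\nQuery: what time is it?',
        },
    ],
}
log_chat_completion_request(payload)
```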

Logs & Screenshots

Below is the actual output from the custom API proxy used during testing. This shows the user message in the tool selection request.

Tool Selection Request:

Messages since the last assistant response:
- system
```
Available Tools: [{"name": "calculator", "description": "\n        Calculate the result of an equation.\n        ", "parameters": {"properties": {"equation": {"description": "The mathematical equation to calculate.", "type": "string"}}, "required": ["equation"], "type": "object"}}, {"name": "get_current_time", "description": "\n        Get the current time in a more human-readable format.\n        ", "parameters": {"properties": {}, "type": "object"}}, {"name": "get_current_weather", "description": "\n        Get the current weather for a given city.\n        ", "parameters": {"properties": {"city": {"default": "New York, NY", "description": "Get the current weather for a given city.", "type": "string"}}, "type": "object"}}, {"name": "get_user_name_and_email_and_id", "description": "\n        Get the user name, Email and ID from the user object.\n        ", "parameters": {"properties": {}, "type": "object"}}]

Your task is to choose and return the correct tool(s) from the list of available tools based on the query. Follow these guidelines:

- Return only the JSON object, without any additional text or explanation.

- If no tools match the query, return an empty array: 
   {
     "tool_calls": []
   }

- If one or more tools match the query, construct a JSON response containing a "tool_calls" array with objects that include:
   - "name": The tool's name.
   - "parameters": A dictionary of required parameters and their corresponding values.

The format for the JSON response is strictly:
{
  "tool_calls": [
    {"name": "toolName1", "parameters": {"key1": "value1"}},
    {"name": "toolName2", "parameters": {"key2": "value2"}}
  ]
}
```
- user
```
Query: History:
USER: """what time is it?"""
Query: what time is it?
```

Additional Information

Related code responsible for the "Query:" prefix duplication:

```python
prompt = f"History:\n{chat_history}\nQuery: {user_message}"
return {
    "model": task_model_id,
    "messages": [
        {"role": "system", "content": content},
        {"role": "user", "content": f"Query: {prompt}"},
    ],
    "stream": False,
    "metadata": {"task": str(TASKS.FUNCTION_CALLING)},
}
```
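A minimal sketch of one possible fix (not the actual patch) is to drop the redundant outer prefix, since the prompt already ends with a "Query:" line. The helper function below is hypothetical; in Open WebUI this logic is inline:

```python
def build_tool_selection_messages(content, chat_history, user_message):
    # Hypothetical helper sketching a fix: the prompt already contains
    # "Query: ...", so the user message can use it directly instead of
    # wrapping it in another "Query: " prefix.
    prompt = f"History:\n{chat_history}\nQuery: {user_message}"
    return [
        {"role": "system", "content": content},
        {"role": "user", "content": prompt},  # no extra "Query: " prefix
    ]

messages = build_tool_selection_messages(
    "Available Tools: [...]",
    'USER: """what time is it?"""',
    "what time is it?",
)
print(messages[1]["content"].count("Query:"))  # 1
```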

Related code responsible for the last user message duplication:

```python
user_message = get_last_user_message(messages)
recent_messages = messages[-4:] if len(messages) > 4 else messages
chat_history = "\n".join(
    f"{message['role'].upper()}: \"\"\"{get_content_from_message(message)}\"\"\""
    for message in recent_messages
)
prompt = f"History:\n{chat_history}\nQuery: {user_message}"
```
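One possible deduplication, sketched below, is to build the history window from all messages except the final user message, which is already sent separately as the query. This simplifies `get_content_from_message` to a plain `m["content"]` lookup for illustration, and the slicing logic is only a suggestion:

```python
def build_prompt(messages):
    # Hypothetical fix sketch: exclude the last user message from the
    # history window so it appears only once, as the query line.
    user_message = messages[-1]["content"]
    history_messages = messages[:-1][-4:]  # window over earlier messages
    chat_history = "\n".join(
        f'{m["role"].upper()}: """{m["content"]}"""'
        for m in history_messages
    )
    return f"History:\n{chat_history}\nQuery: {user_message}"

prompt = build_prompt([
    {"role": "user", "content": "hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "what time is it?"},
])
print(prompt)
```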

I will create a PR to attempt to resolve this issue.
