
Fix tool call for Qwen3.5 #1300

Merged
ikawrakow merged 2 commits into ikawrakow:main from sayap:qwen3.5-tool-call
Feb 23, 2026

Conversation

@sayap (Contributor) commented Feb 22, 2026


Loosely based on mainline changes from:
* ggml-org/llama.cpp#19635
* ggml-org/llama.cpp#19765

Also need to change the grammar to allow the model to make multiple
tool calls in a row. This was likely broken for Qwen3 Coder prior to
this commit.
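The "multiple tool calls in a row" case can be illustrated with a small parser sketch (Python; the function and regex names are hypothetical, and the Qwen3-Coder-style `<tool_call>` XML format is assumed from the PR context):

```python
import re

# Hypothetical sketch: split a model response into consecutive
# <tool_call>...</tool_call> blocks, Qwen3-Coder style. The grammar fix in
# this PR allows the model to emit more than one such block in a row,
# so a parser must recover all of them, not just the first.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(.*?)\s*</tool_call>", re.DOTALL)

def split_tool_calls(text: str) -> list[str]:
    """Return the inner payload of every tool-call block, in order."""
    return TOOL_CALL_RE.findall(text)

response = (
    "<tool_call>\n<function=ls>\n</function>\n</tool_call>\n"
    "<tool_call>\n<function=cat>\n<parameter=path>\nREADME.md\n</parameter>\n"
    "</function>\n</tool_call>"
)
calls = split_tool_calls(response)
# Both calls are recovered, not just the first one.
```

The actual fix is in the server's GBNF grammar rather than a regex, but the accepted language is the same idea: one-or-more tool-call blocks instead of exactly one.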

if (supports_reasoning && string_ends_with(data.prompt, "<think>\n")) {
if (!params.enable_thinking) {
data.prompt += "</think>";
@MrHills-rs commented Feb 22, 2026
I still need to test it more but I have not had much success in no think mode by appending
<think>\n</think>. This might also be the fact that I'm using a small quant (IQ2_XS), but I did get repetition issues occasionally.

It's hard to replicate but I think I got better output by using <think>\n\n</think>\n\n as the original Jinja suggests.

https://huggingface.co/Qwen/Qwen3.5-397B-A17B/blob/main/chat_template.jinja


Also, I haven't looked at the backend too much, but isn't this logic supposed to be dealt with by the Jinja?

@sayap (Contributor, Author)

I am actually not too sure what the purpose of the if-block here is. I know the else-block is needed to set thinking_forced_open to true, so that thinking goes under reasoning_content. Let me double-check.

Also, I just pushed a commit that fixes the grammar to handle tool calls with multiple parameters correctly. It slipped through my testing.
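The multiple-parameters fix can be sketched the same way (Python; hypothetical names, with the `<parameter=...>` block format assumed from the Qwen3-Coder template): the pattern must repeat the whole parameter rule for every subsequent parameter, not just match the first occurrence.

```python
import re

# Hypothetical sketch of the "subsequent parameters" fix: a function call
# may carry one <parameter=...> block or several in a row, so the pattern
# is applied repeatedly over the function body.
PARAM_RE = re.compile(r"<parameter=([^>]+)>\n(.*?)\n</parameter>", re.DOTALL)

def parse_parameters(function_body: str) -> dict[str, str]:
    """Collect every (name, value) pair from a function body, in order."""
    return dict(PARAM_RE.findall(function_body))

body = (
    "<parameter=path>\nsrc/main.cpp\n</parameter>\n"
    "<parameter=line>\n42\n</parameter>\n"
)
params = parse_parameters(body)
# Both parameters are captured, not only the first.
```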

@MrHills-rs commented Feb 22, 2026

If Jinja is being used and the context ends with "<think>\n" then it means that thinking should be on, so the code under if (!params.enable_thinking) should never run anyway.

If you're not using Jinja but rather formatting at front end level with text completion, the formatting should be handled by the front end, not llama.cpp.

That code might add a sneaky extra unwanted token to text completion front end programmers, depending on how params.enable_thinking is set. I don't see the point of it, but I'm not deep into backend so idk.

@sayap (Contributor, Author) commented Feb 22, 2026

If Jinja is being used and the context ends with "<think>\n" then it means that thinking should be on, so the code under if (!params.enable_thinking) should never run anyway.

You are right. So if I understand correctly, the if-block here should be for chat templates like https://huggingface.co/stepfun-ai/Step-3.5-Flash/blob/main/chat_template.jinja, which don't support the enable_thinking flag themselves. The if-block then adds a "</think>", as a best effort to disable reasoning.
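The behavior being discussed can be paraphrased in a few lines (Python paraphrase of the quoted C++ branch; the names mirror the snippet above but this is a sketch, not the real server API): when the rendered prompt ends with an open "<think>\n", either force-close the thinking block (best effort for templates without an enable_thinking flag) or mark thinking as forced open so the output is routed to reasoning_content.

```python
def finish_prompt(prompt: str, supports_reasoning: bool, enable_thinking: bool):
    """Sketch of the post-template prompt fixup discussed in this thread."""
    thinking_forced_open = False
    if supports_reasoning and prompt.endswith("<think>\n"):
        if not enable_thinking:
            # Best-effort way to disable reasoning: close the block
            # ourselves. This appends one extra token to the prompt.
            prompt += "</think>"
        else:
            # Thinking stays open; downstream parsing puts the model's
            # output under reasoning_content until it emits </think>.
            thinking_forced_open = True
    return prompt, thinking_forced_open

p, forced = finish_prompt("...<think>\n", True, False)
# p now ends with "</think>", which matches the prompt-token bump
# (16 -> 17) observed in the examples later in this thread.
```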


It was Stepfun-3.5-Q4_K_L so not that small.

@sayap (Contributor, Author)

So I downloaded the IQ4_XS quant from https://huggingface.co/ubergarm/Step-3.5-Flash-GGUF/, which has the same chat template as https://huggingface.co/stepfun-ai/Step-3.5-Flash/blob/main/chat_template.jinja

Thinking seems to be working fine. And with the change in this PR, I can now set the "enable_thinking": false chat template kwargs, either as a llama-server cli flag or in the request body, to disable thinking.


I should have downloaded ubergarm's quant, heh. Maybe I will just steal the template out of his metadata.

@sayap (Contributor, Author)

The Stepfun-3.5-Q4_K_L quant is from Bartowski, right? Looking at https://huggingface.co/bartowski/stepfun-ai_Step-3.5-Flash-GGUF/edit/main/stepfun-ai_Step-3.5-Flash-Q4_K_L/stepfun-ai_Step-3.5-Flash-Q4_K_L-00001-of-00004.gguf, the chat template is also identical to the official one from stepfun-ai, so it should work fine with this PR branch.

This is what it looks like with a simple Q&A without tools:

$ ./build/bin/llama-server --host 127.0.0.1 --port 8080 \
  -m /tank/models/ubergarm/Step-3.5-Flash-GGUF/IQ4_XS/Step-3.5-Flash-IQ4_XS-00001-of-00004.gguf \
  -ngl 999 -cmoe --no-mmap -c 1024 \
  --jinja --chat-template-file /tank/models/stepfun-ai/Step-3.5-Flash/chat_template.jinja

...

$ curl -s 127.0.0.1:8080/v1/chat/completions -d '{"messages":[{"role": "user", "content": "who are you?"}]}' | jq .
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "role": "assistant",
        "reasoning_content": "Hmm, the user is asking a straightforward \"who are you?\" question. This is a common initial query when someone encounters an AI assistant for the first time. \n\nI should provide a clear and concise introduction that establishes my identity, capabilities, and limitations. The user likely wants to understand what they can expect from interacting with me. \n\nI'll start by stating my name and developer, then summarize my core functions in a simple list format for readability. It's important to mention both what I can do and what I cannot do (like accessing real-time data) to set proper expectations. \n\nThe tone should be friendly and welcoming while remaining professional. I'll end with an invitation for the user to ask questions, encouraging further interaction.\n",
        "content": "I'm **Step**, a large language model developed by **StepFun** (also known as 阶跃星辰).  \n\nI’m designed to understand and generate natural language, perform logical and visual reasoning, answer questions, assist with creative writing, coding, math, document analysis, and more. My goal is to be **honest, helpful, respectful of privacy**, and to provide positive and reliable interactions.\n\nI’m **not affiliated** with other companies or models such as OpenAI’s ChatGPT, Anthropic’s Claude, Baidu’s 文心一言, or Alibaba’s 通义千问 — I’m independently developed by StepFun.\n\nHow can I help you today? 😊"
      }
    }
  ],
  "created": 1771809983,
  "model": "Step-3.5-Flash-IQ4_XS-00001-of-00004.gguf",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 296,
    "prompt_tokens": 16,
    "total_tokens": 312
  },
  ...
}

@sayap (Contributor, Author)

And this is what it looks like with thinking disabled in the request body:

$ curl -s 127.0.0.1:8080/v1/chat/completions -d '{"messages":[{"role": "user", "content": "who are you?"}], "chat_template_kwargs": {"enable_thinking": false}}' | jq .
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I am Step, a large language model developed by StepFun. I'm designed to understand and generate natural language, perform logical and mathematical reasoning, and assist with a wide range of tasks — from answering questions to creative writing, coding, and more. I aim to be helpful, honest, and respectful while maintaining a friendly and positive tone. If you have any questions or need assistance, feel free to ask!"
      }
    }
  ],
  "created": 1771810347,
  "model": "Step-3.5-Flash-IQ4_XS-00001-of-00004.gguf",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 86,
    "prompt_tokens": 17,
    "total_tokens": 103
  },
  ...
}

The prompt tokens went from 16 to 17, as this particular line adds the </think> token to the prompt.

@ikawrakow ikawrakow merged commit dcf50d8 into ikawrakow:main Feb 23, 2026
abc-nix pushed a commit to abc-nix/ik_llama.cpp that referenced this pull request Feb 26, 2026
* Fix tool call for Qwen3.5

Loosely based on mainline changes from:
* ggml-org/llama.cpp#19635
* ggml-org/llama.cpp#19765

Also need to change the grammar to allow the model to make multiple
tool calls in a row. This was likely broken for Qwen3 Coder prior to
this commit.

* Fix the grammar for the subsequent parameters after the first one
abc-nix pushed a commit to abc-nix/ik_llama.cpp that referenced this pull request Feb 26, 2026
* Better estimate for max. number of compute nodes

* Just in case

server: fix crash from adaptive p (ikawrakow#1304)

Co-authored-by: firecoperana <firecoperana>

Fix tool call for Qwen3.5 (ikawrakow#1300)

* Fix tool call for Qwen3.5

Loosely based on mainline changes from:
* ggml-org/llama.cpp#19635
* ggml-org/llama.cpp#19765

Also need to change the grammar to allow the model to make multiple
tool calls in a row. This was likely broken for Qwen3 Coder prior to
this commit.

* Fix the grammar for the subsequent parameters after the first one

Graph parallel for Qwen3-Next (ikawrakow#1292)

* WIP

* This works, but is slower than split mode layer

Fix llm_arch_is_hybrid (ikawrakow#1305)

Fix max nodes (again) (ikawrakow#1306)

Fix typo in merge-up-gate-experts argument (ikawrakow#1311)

llama-quantize: --dry-run option (ikawrakow#1309)

Slightly better graph parallel for Qwen3-Next (ikawrakow#1307)

* Make sure we pick the reduced tensor from the right GPU

* Minor

Minor delta-net tweak (ikawrakow#1308)

* Make sure we pick the reduced tensor from the right GPU

* Minor

* Minor delta-net tweak

adaptive p: collect probability before logit bias (ikawrakow#1314)

server: propagate task index to response objects for batch requests (ikawrakow#1303)

When multiple prompts are sent in a single /v1/completions request,
each response needs to carry the correct index so the client can
match results to their corresponding prompts. The index field was
not being set on partial responses, final responses, or embedding
responses, causing batch results to all report index 0.

Set res->index = slot.task->index in send_partial_response,
send_final_response, and send_embedding.

Generated with [Devin](https://cli.devin.ai/docs)

Co-authored-by: Joshua Jolley <jjolley@clearwateranalytics.com>
Co-authored-by: Devin <noreply@cognition.ai>

Llama-quantize: Partial requant feature (ikawrakow#1313)

* Partial Requant feature for llama-quantize

- Inspired by the recently portcopied --dry-run feature.
- Allows to partially requantize a split quantized .gguf by requantizing only the missing splits in the destination directory.
- Works both for GGUFs split tensor by tensor and for those split in groups of several tensors (though the latter is not well tested beyond 2 tensors per split).
- Vibe coded.

* Create output directory if it doesn't exist in llama-quantize

* Create output directory if it doesn't exist in gguf-split

* Add exit when directory fails to be created on Windows

* Use std::filesystem

* cleanup

Display the size of the tensors overridden during the tensor loading (ikawrakow#1318)

* Display the size of the tensors overridden during the tensor loading

Ex:

`Tensor blk.60.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.60.ffn_up_exps.weight buffer type overriden to CPU`

become

`Tensor blk.60.ffn_up_exps.weight (size = 668467200 bytes) buffer type overriden to CPU
Tensor blk.60.ffn_gate_exps.weight (size = 668467200 bytes) buffer type overriden to CPU`

And pass in debug the later displayed size of the unnamed buffer overrides.

Ex : `llm_load_tensors:        CPU buffer size =   XXX.XX MiB`

That double display is cluttering the screen without being very informative.

* change bytes display to MiB.

Co-authored-by: Kawrakow <iwankawrakow@gmail.com>

---------

Co-authored-by: Kawrakow <iwankawrakow@gmail.com>

Fused delta-net (ikawrakow#1315)

* Revive fused delta-net

* Add command line argument for fused delta net

* Simplify/improve CUDA delta-net

* Add -fdn to llama-bench

* More CUDA fused delta net optimizations

* CPU optimizations

* Much faster fused delta-net on the CPU

It seems it is faster than the chunked implementation!

* Change meaning of fdn from bool flag to threshold value

* Use eps = 1e-6

* Give some nodes a name

Fix KT quantization yet again (ikawrakow#1321)

* Fix KT quantization yet again

* Add same 1e-16f check for all quants in iqk_quantize.cpp

* Fixes for k-quants

* Also this one

server: enable checkpoint for recurrent models (ikawrakow#1310)

* server: enable checkpoint for recurrent models

create checkpoint after cancel

fix ban string and rm context during rewind

add checkpoint interval

only save recurrent cache

* save checkpoint during pp

---------

Co-authored-by: firecoperana <firecoperana>

Faster quantization for MoE models with many experts (ikawrakow#1322)

Fused delta net 2 (ikawrakow#1320)

* Revive fused delta-net

* Add command line argument for fused delta net

* Simplify/improve CUDA delta-net

* Add -fdn to llama-bench

* More CUDA fused delta net optimizations

* CPU optimizations

* Much faster fused delta-net on the CPU

It seems it is faster than the chunked implementation!

* Change meaning of fdn from bool flag to threshold value

* Use eps = 1e-6

* Give some nodes a name

* Don't re-apply L2 norm - it has already been done

* This seems quite a bit better

* More tweaks

* Restore per context buffer size log

Not everybody uses models split in 2000 parts, and those who do,
actually want to see the biffer sizes.

Adding support for dense Qwen3.5 models (ikawrakow#1326)

add directio to llama-bench