Fix Xiaomi MiMo reasoning-only completions #60304

HiddenPuppy wants to merge 2 commits into openclaw:main from
Conversation
Greptile Summary

This PR adds a Xiaomi-specific stream compatibility wrapper.

Confidence Score: 5/5. Safe to merge: no P0/P1 issues, all remaining findings are minor style suggestions. The core normalization logic is correct and well-guarded. Both the iterator and result() rewrite paths are exercised by tests. The only open items are a stylistic early-break asymmetry and missing direct coverage of mimo-v2-omni, neither of which affects runtime behavior. No files require special attention.
Last reviewed commit: "Normalize Xiaomi reasoning-only completi..."
extensions/xiaomi/stream.ts (lines 62-65):

```ts
      ) {
        hasRenderableText = true;
        break;
      }
```
**Early `break` may shadow `hasToolCalls` detection**

The `break` fires as soon as any renderable text block is found, so subsequent `toolCall` blocks are never visited; `hasToolCalls` stays `false` for those iterations. Correctness is preserved because `hasRenderableText = true` already causes the function to bail on line 78, but the loop's three booleans read as symmetric flags when they aren't. If the loop body ever grows (e.g. another bypass condition), this implicit ordering could silently create a bug. Consider removing the `break` or adding a comment explaining that only `hasRenderableText` needs to short-circuit.
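One concise resolution is simply to drop the `break` so every flag reflects the whole message. The sketch below illustrates that shape; the block and flag names are taken from the review comment, while the surrounding types are invented here for illustration and are not the PR's actual code.

```typescript
// Hypothetical content-block shape, inferred from the review excerpt.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "thinking"; thinking: string }
  | { type: "toolCall"; name: string };

// Scan every block with no early break, so each flag describes the whole
// message rather than only the prefix seen before the first text block.
function scanBlocks(content: ContentBlock[]) {
  let hasRenderableText = false;
  let hasToolCalls = false;
  for (const block of content) {
    if (block.type === "text" && block.text.trim().length > 0) {
      hasRenderableText = true;
    } else if (block.type === "toolCall") {
      hasToolCalls = true;
    }
  }
  return { hasRenderableText, hasToolCalls };
}
```

The cost of the extra iterations is negligible for typical message sizes, and the flags stay safe to reuse if the loop body later grows another bypass condition.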
extensions/xiaomi/index.test.ts (lines 62-104):

```ts
it("normalizes reasoning-only final assistant messages into text for MiMo reasoning models", async () => {
  const stream = runWrappedXiaomiStream({
    modelId: "mimo-v2-pro",
    events: [
      {
        type: "done",
        reason: "stop",
        message: {
          role: "assistant",
          content: [{ type: "thinking", thinking: "MiMo final answer" }],
          stopReason: "stop",
        },
      },
    ],
    resultMessage: {
      role: "assistant",
      content: [{ type: "thinking", thinking: "MiMo final answer" }],
      stopReason: "stop",
    },
  });

  const events: unknown[] = [];
  for await (const event of stream) {
    events.push(event);
  }

  await expect(stream.result()).resolves.toEqual({
    role: "assistant",
    content: [{ type: "text", text: "MiMo final answer" }],
    stopReason: "stop",
  });
  expect(events).toEqual([
    {
      type: "done",
      reason: "stop",
      message: {
        role: "assistant",
        content: [{ type: "text", text: "MiMo final answer" }],
        stopReason: "stop",
      },
    },
  ]);
});
```
**`mimo-v2-omni` not covered by tests**

The `XIAOMI_REASONING_AS_TEXT_MODEL_IDS` Set lists both `mimo-v2-pro` and `mimo-v2-omni` as target models, but the tests only exercise `mimo-v2-pro`. Adding a parallel case for `mimo-v2-omni` (or parameterizing the existing test) would lock in the full set of affected models against future `XIAOMI_REASONING_AS_TEXT_MODEL_IDS` edits.
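One lightweight way to lock in the full model set is to derive the test cases from the Set itself rather than hard-coding a single model. The sketch below is framework-free for illustration; in the real suite the same array would feed `it.each`. The Set contents come from the review, while the predicate name and the non-target model ID are hypothetical stand-ins.

```typescript
// Stand-in for the Set named in the review; the real definition lives in
// the Xiaomi extension.
const XIAOMI_REASONING_AS_TEXT_MODEL_IDS = new Set(["mimo-v2-pro", "mimo-v2-omni"]);

// Hypothetical gating predicate the wrapper would consult per model.
const isReasoningAsTextModel = (modelId: string): boolean =>
  XIAOMI_REASONING_AS_TEXT_MODEL_IDS.has(modelId);

// One case per entry in the Set, plus a hypothetical non-target model, so
// future edits to the Set automatically adjust the coverage.
const cases: Array<[string, boolean]> = [
  ...[...XIAOMI_REASONING_AS_TEXT_MODEL_IDS].map((id): [string, boolean] => [id, true]),
  ["mimo-v2-base", false], // hypothetical non-target model ID
];

for (const [modelId, expected] of cases) {
  console.assert(isReasoningAsTextModel(modelId) === expected, modelId);
}
```

Parameterizing this way means a future addition to `XIAOMI_REASONING_AS_TEXT_MODEL_IDS` gets test coverage without anyone remembering to add a new test body.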
Codex review: needs changes before merge.

Summary

Reproducibility: yes. The linked Xiaomi report gives concrete setup steps.

Next step before merge

Security Review findings

Review details

Best possible solution: keep the fix in the Xiaomi plugin boundary: repair or replace the PR on current main, use the public async plugin test helper, add changelog coverage, and avoid changing the generic OpenAI-compatible parser or non-Xiaomi providers.

Do we have a high-confidence way to reproduce the issue? Yes. The linked Xiaomi report gives concrete setup steps.

Is this the best way to solve the issue? Mostly yes. A Xiaomi-owned wrapper for the two affected MiMo models is the narrow, maintainable direction, but this PR is not mergeable as submitted until the stale test-helper usage and changelog gap are fixed.

Full review comments:
Overall correctness: patch is incorrect.

Acceptance criteria:

What I checked:

Likely related people:

Remaining risk / open question:
Codex review notes: model gpt-5.5, reasoning high; reviewed against cc8a8f1df1cd.
Summary
`mimo-v2-pro`/`mimo-v2-omni` can return the user-visible final answer in `reasoning_content`, which OpenClaw currently surfaces as `thinking` instead of normal assistant text, so the reply can appear blank.

Change Type (select all)
Scope (select all touched areas)
Linked Issue/PR
Root Cause / Regression History (if applicable)
- The final answer arrives in `reasoning_content`. The shared parser maps that to a `thinking` block, while downstream rendering expects user-visible output in `text` blocks.
- Regression history (`git blame`, prior PR, issue, or refactor if known): the shared OpenAI-compatible parsing behavior is reasonable for providers that use `reasoning_content` for actual reasoning output, but Xiaomi MiMo uses it for the final visible answer in this case.
- The behavior reaches users through the `openai-completions` path without a provider-specific normalization step.

Regression Test Plan (if applicable)
- `extensions/xiaomi/index.test.ts`: verifies that `mimo-v2-pro`/`mimo-v2-omni` completions are normalized into text, while non-target Xiaomi models remain unchanged.
- Tests exercise the `wrapStreamFn` seam without needing live Xiaomi API traffic.

User-visible / Behavior Changes
`mimo-v2-pro`/`mimo-v2-omni` replies no longer appear blank when the provider returns the final answer via `reasoning_content`.

Diagram (if applicable)
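In lieu of a diagram, the behavior change can be sketched as a small normalization pass over the final assistant message. The types and function name below are illustrative assumptions, not the PR's actual code; only the thinking-to-text rewrite itself is taken from the description above.

```typescript
// Assumed block and message shapes, modeled on the test expectations above.
type Block =
  | { type: "text"; text: string }
  | { type: "thinking"; thinking: string };

interface AssistantMessage {
  role: "assistant";
  content: Block[];
  stopReason: string;
}

// If the final assistant message carries only `thinking` blocks, surface
// them as `text` so the reply is not rendered blank; messages that already
// contain renderable text pass through untouched.
function normalizeReasoningOnlyMessage(msg: AssistantMessage): AssistantMessage {
  const hasText = msg.content.some(
    (b) => b.type === "text" && b.text.trim().length > 0,
  );
  const hasThinking = msg.content.some((b) => b.type === "thinking");
  if (hasText || !hasThinking) return msg;
  return {
    ...msg,
    content: msg.content.map((b) =>
      b.type === "thinking" ? { type: "text" as const, text: b.thinking } : b,
    ),
  };
}
```

Applied to the reasoning-only message from the test above, this rewrites `[{ type: "thinking", thinking: "MiMo final answer" }]` into a single text block while leaving mixed-content messages alone.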