
refactor: typed reasoning content model#1395

Merged
joshua-mo-143 merged 4 commits into 0xPlaygrounds:main from darinkishore:pr/reasoning-core-types
Feb 16, 2026

Conversation

@darinkishore
Contributor

Summary

  • Introduces ReasoningContent enum (Text, Encrypted, Redacted, Summary), replacing the unstructured string-based reasoning representation
  • Adds Reasoning struct with id, content: Vec<ReasoningContent> and builder methods (new, with_id, new_with_signature, etc.)
  • Adds ReasoningDelta variant and MessageId variant to RawStreamingChoice for streaming infrastructure
  • Adds message_id: Option<String> to CompletionResponse and StreamingCompletionResponse for provider-assigned IDs (needed by OpenAI Responses API)
  • Removes panics from unsupported provider conversion paths, replacing them with proper error propagation
  • Updates streaming agent loop to accumulate reasoning blocks and yield Reasoning/ReasoningDelta events

Closes #684
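
The shape of the new model can be sketched roughly as follows. This is an illustrative stand-in based on the summary above, not the actual rig-core implementation; method bodies and variant payloads are simplified assumptions:

```rust
// Illustrative sketch of the typed reasoning model described above.
// Variant and method names follow the PR summary; payloads are simplified.
#[derive(Debug, Clone, PartialEq)]
pub enum ReasoningContent {
    /// Plain thinking text, optionally carrying a provider signature.
    Text { text: String, signature: Option<String> },
    /// Opaque encrypted state for stateless replay.
    Encrypted(String),
    /// Safety-flagged content withheld by the provider (e.g. Anthropic).
    Redacted(String),
    /// Sanitized summary of hidden reasoning (e.g. OpenAI, xAI).
    Summary(String),
}

#[derive(Debug, Clone, PartialEq)]
pub struct Reasoning {
    pub id: Option<String>,
    pub content: Vec<ReasoningContent>,
}

impl Reasoning {
    pub fn new(text: impl Into<String>) -> Self {
        Self {
            id: None,
            content: vec![ReasoningContent::Text { text: text.into(), signature: None }],
        }
    }

    pub fn new_with_signature(text: impl Into<String>, sig: impl Into<String>) -> Self {
        Self {
            id: None,
            content: vec![ReasoningContent::Text {
                text: text.into(),
                signature: Some(sig.into()),
            }],
        }
    }

    pub fn with_id(mut self, id: impl Into<String>) -> Self {
        self.id = Some(id.into());
        self
    }
}
```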

Breaking API Changes

| Item | Migration |
| --- | --- |
| `AssistantContent::Reasoning` now wraps the `Reasoning` struct instead of raw fields | Use `Reasoning::new("text")`, `Reasoning::new_with_signature("text", "sig")`, or match on `reasoning.content` |
| `StreamedAssistantContent::Reasoning` carries `Reasoning` instead of the old shape | Pattern match on the `Reasoning` struct |
| `StreamedAssistantContent::ReasoningDelta` fields changed | Now `{ reasoning: String, id: Option<String> }` |
| `CompletionResponse<T>` has a new `message_id: Option<String>` field | Add `message_id: None` when constructing |
| `RawStreamingChoice` has new `Reasoning`, `ReasoningDelta`, `MessageId` variants | Add arms to exhaustive matches |
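A hedged migration sketch: code that previously destructured raw reasoning fields now matches on the wrapped struct. The types here are minimal stand-ins mirroring the table, not the actual rig-core definitions:

```rust
// Stand-in types mirroring the PR's new shape (not the real rig-core items).
enum ReasoningContent {
    Text { text: String, signature: Option<String> },
    Redacted(String),
}

struct Reasoning {
    id: Option<String>,
    content: Vec<ReasoningContent>,
}

enum AssistantContent {
    Text(String),
    Reasoning(Reasoning),
}

// Before the refactor, callers matched raw fields; after it, they
// pattern match on the Reasoning struct and iterate its content blocks.
fn render(item: &AssistantContent) -> String {
    match item {
        AssistantContent::Text(t) => t.clone(),
        AssistantContent::Reasoning(r) => r
            .content
            .iter()
            .map(|c| match c {
                ReasoningContent::Text { text, .. } => text.as_str(),
                ReasoningContent::Redacted(_) => "[redacted]",
            })
            .collect::<Vec<_>>()
            .join("\n"),
    }
}
```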

Test plan

  • cargo fmt clean
  • cargo clippy --all-targets --all-features clean
  • cargo test -p rig-core --lib — 271 tests pass
  • Existing openai_responses_input_item tests updated and passing

Replace panic!() with graceful handling when providers encounter
reasoning content they don't support (OpenAI Chat Completions,
Mistral, HuggingFace, Together). Previously these would crash
the process; now they skip unsupported reasoning blocks.

Part of 0xPlaygrounds#1147, 0xPlaygrounds#684
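
The before/after behavior can be sketched roughly like this; provider and type names are illustrative, and the real conversion code lives in each provider module:

```rust
// Illustrative: converting assistant content into provider-specific message
// parts for a provider that has no representation for reasoning blocks.
enum AssistantContent {
    Text(String),
    Reasoning(String), // simplified stand-in for the Reasoning struct
}

// Before: an unsupported reasoning block hit a panic!() and crashed
// the process. After: unsupported blocks are skipped via filter_map,
// so the request proceeds with the content the provider can accept.
fn to_provider_parts(contents: &[AssistantContent]) -> Vec<String> {
    contents
        .iter()
        .filter_map(|c| match c {
            AssistantContent::Text(t) => Some(t.clone()),
            AssistantContent::Reasoning(_) => None, // skipped, no panic
        })
        .collect()
}
```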
Replace the flat Reasoning struct with a discriminated content model:
- ReasoningContent::Text { text, signature } — signed thinking blocks
- ReasoningContent::Encrypted — opaque state for stateless replay
- ReasoningContent::Redacted — safety-flagged content (Anthropic)
- ReasoningContent::Summary — sanitized summaries (OpenAI, xAI)

Add message_id to CompletionResponse and streaming infrastructure
so providers can thread response IDs through multi-turn history.

Implements 0xPlaygrounds#1147, 0xPlaygrounds#684
@joshua-mo-143
Contributor

Ah crap I think I broke something while merging main back in. I'll fix this up then we can get it merged

@joshua-mo-143
Contributor

joshua-mo-143 commented Feb 16, 2026

Looks like the LLM hallucinated; message_id actually exists in #1396, so it's not technically in this PR.

Otherwise, lgtm so I'll be merging this

@joshua-mo-143 joshua-mo-143 added this pull request to the merge queue Feb 16, 2026
Merged via the queue into 0xPlaygrounds:main with commit 019b551 Feb 16, 2026
5 checks passed
@github-actions bot mentioned this pull request Feb 16, 2026
Fromsko added a commit to Fromsko/rig that referenced this pull request Feb 25, 2026
…tible providers

The OpenAI Chat Completions streaming parser (StreamingDelta) does not
parse the reasoning_content field from SSE chunks, causing reasoning/
thinking content from OpenAI-compatible providers to be silently dropped.

This affects all models that send reasoning_content via the standard
Chat Completions streaming format, including:
- GLM-4.7 (Zhipu AI)
- DeepSeek models via OpenAI-compatible endpoints
- Qwen with thinking mode
- vLLM / Ollama OpenAI-compatible endpoints

The fix adds reasoning_content: Option<String> to StreamingDelta and
yields RawStreamingChoice::ReasoningDelta events when present, matching
the existing pattern in the DeepSeek-specific provider.

Note: RawStreamingChoice::ReasoningDelta was already defined in
streaming.rs (added in PR 0xPlaygrounds#1395), but the generic OpenAI streaming
parser never utilized it.
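
The shape of that fix can be sketched as follows; the struct and variant names come from the commit message, but the field layout and surrounding SSE plumbing are assumptions, not the crate's actual code:

```rust
// Sketch: an optional reasoning_content field on the deserialized streaming
// delta, surfaced as a ReasoningDelta event when present.
struct StreamingDelta {
    content: Option<String>,
    // Non-standard field emitted by many OpenAI-compatible servers
    // (GLM, DeepSeek, Qwen, vLLM, Ollama); not part of the official
    // OpenAI API.
    reasoning_content: Option<String>,
}

enum RawStreamingChoice {
    Message(String),
    ReasoningDelta { reasoning: String, id: Option<String> },
}

// Convert one parsed delta into zero or more streaming choices,
// yielding reasoning before regular content.
fn choices_from_delta(delta: StreamingDelta) -> Vec<RawStreamingChoice> {
    let mut out = Vec::new();
    if let Some(r) = delta.reasoning_content {
        out.push(RawStreamingChoice::ReasoningDelta { reasoning: r, id: None });
    }
    if let Some(c) = delta.content {
        out.push(RawStreamingChoice::Message(c));
    }
    out
}
```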
github-merge-queue Bot pushed a commit that referenced this pull request Mar 4, 2026
…tible providers (#1441)

* fix(openai): add reasoning_content to StreamingDelta for OpenAI-compatible providers


* fix(openai): clarify purpose of reasoning_content in StreamingDelta

Added a comment to clarify that reasoning_content is not part of the official OpenAI API.

Development

Successfully merging this pull request may close these issues.

refactor: reasoning should not be Vec<String>

2 participants