Problem
OpenClaw already forwards several OpenAI Responses controls like max_output_tokens, reasoning.effort, and service_tier, but it does not currently forward OpenAI's text.verbosity setting.
That makes it harder to use an important OpenAI-native control for answer length/style without resorting to prompt hacks.
Why this matters
text.verbosity is distinct from reasoning depth:
- models can think deeply internally
- but still return a short external answer
This is useful for people who want "think more, say less" behavior from OpenAI models.
Proposed support
Allow model params such as:
```
agents: {
  defaults: {
    models: {
      "openai/gpt-5.4": {
        params: {
          textVerbosity: "low"
        }
      }
    }
  }
}
```
and forward that to OpenAI Responses payloads as:
```json
{
  "text": { "verbosity": "low" }
}
```
Prefer supporting both alias styles:
- textVerbosity
- text_verbosity
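A minimal sketch of how the alias resolution and payload injection could look. All names here (`ModelParams`, `resolveVerbosity`, `buildResponsesPayload`) are illustrative, not OpenClaw's actual internals; the camelCase-wins precedence rule is an assumption, not settled behavior:

```typescript
type Verbosity = "low" | "medium" | "high";

const VALID_VERBOSITY: ReadonlySet<string> = new Set(["low", "medium", "high"]);

// Hypothetical shape of the per-model params after config parsing.
interface ModelParams {
  textVerbosity?: string;
  text_verbosity?: string;
}

// Accept either alias; assume camelCase takes precedence when both are set.
// Reject values outside OpenAI's accepted set rather than forwarding them.
function resolveVerbosity(params: ModelParams): Verbosity | undefined {
  const raw = params.textVerbosity ?? params.text_verbosity;
  if (raw === undefined) return undefined;
  if (!VALID_VERBOSITY.has(raw)) {
    throw new Error(`invalid text verbosity: ${raw}`);
  }
  return raw as Verbosity;
}

// Inject the resolved value into the Responses payload under text.verbosity,
// omitting the key entirely when the param is not configured.
function buildResponsesPayload(params: ModelParams): Record<string, unknown> {
  const payload: Record<string, unknown> = {};
  const verbosity = resolveVerbosity(params);
  if (verbosity !== undefined) {
    payload.text = { verbosity };
  }
  return payload;
}
```

Throwing on invalid values (rather than silently dropping them) surfaces config typos early; the precedence and error-handling choices above are exactly the cases the tests in the scope below would pin down.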
Scope
- OpenAI Responses payload shaping
- config/model params passthrough
- tests for payload injection / invalid values / precedence
I already have a patch prepared for this and will open a PR shortly.