Summary
Allow per-model configuration of the AWS Bedrock serviceTier parameter (e.g., flex, priority, standard) via OpenClaw model params, similar to how params.serviceTier already works for the OpenAI and Anthropic providers.
Problem to solve
AWS Bedrock supports four service tiers — Standard, Priority, Flex, and Reserved — that allow users to trade latency for cost savings. Flex tier offers a 50% discount on eligible models (e.g., Mistral Large 3: $0.50→$0.25/M input, $1.50→$0.75/M output). Currently, OpenClaw has no way to pass this parameter to Bedrock, making it impossible to use cost-optimized flex tier from within OpenClaw.
Use case: Running Mistral Large 3 on flex tier via Bedrock for non-time-sensitive tasks (summarization, batch analysis, agentic workflows) at half the on-demand cost.
API reference: serviceTier is a top-level field of the Bedrock Converse API request body, not an inferenceConfig sub-field: {"serviceTier": {"type": "flex"}}.
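For reference, a full Converse request body with flex tier would look roughly like this (the message and inferenceConfig values are illustrative; the point is where serviceTier sits):

{
  "messages": [
    { "role": "user", "content": [{ "text": "Summarize this report." }] }
  ],
  "inferenceConfig": {
    "maxTokens": 1024,
    "temperature": 0.7
  },
  "serviceTier": { "type": "flex" }
}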
Proposed solution
Extend the existing params.serviceTier mechanism (already used by OpenAI and Anthropic) to also forward to Bedrock when configured under models.providers.amazon-bedrock or per-model params:
{
  "models": {
    "providers": {
      "amazon-bedrock": {
        "models": [
          {
            "id": "mistral.mistral-large-3-675b-instruct",
            "params": {
              "serviceTier": "flex"
            }
          }
        ]
      }
    }
  }
}
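If a provider-level params block is supported with the same shape (an assumption about the OpenClaw config schema rather than a confirmed detail), a default for all Bedrock models could be expressed as:

{
  "models": {
    "providers": {
      "amazon-bedrock": {
        "params": {
          "serviceTier": "flex"
        }
      }
    }
  }
}

Per-model params would presumably override such a provider-level default.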
Alternatives considered
This could also be done by adding support for Bedrock's additionalModelRequestFields, which would provide maximum flexibility in passing arbitrary parameters to the Converse API. However, serviceTier is already a recognized param for other model providers, and consistency is more important than flexibility here.
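For comparison, a hypothetical per-model config under that alternative might look like the sketch below. Note that because serviceTier is a top-level request field rather than a model request field, a generic additionalModelRequestFields pass-through might not place it correctly anyway:

{
  "id": "mistral.mistral-large-3-675b-instruct",
  "params": {
    "additionalModelRequestFields": {
      "serviceTier": { "type": "flex" }
    }
  }
}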
Impact
- Affected: All Bedrock users who can tolerate flex tier's longer processing times
- Model(s) affected: Any Bedrock model that supports flex/priority tier — confirmed examples include Mistral Large 3, DeepSeek v3.2, DeepSeek v3.1, Qwen3 variants, Amazon Nova models.
- Severity: Medium (cost savings for workloads that can tolerate flex tier's longer processing times)
- Frequency: Every interaction with the model
- Consequence: Higher model usage costs (standard-tier rates) for workloads that could run on flex tier
Evidence/examples
params.serviceTier already works for OpenAI and Anthropic providers.
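For example, the same knob for an OpenAI model (model id illustrative, assuming the same config shape):

{
  "models": {
    "providers": {
      "openai": {
        "models": [
          {
            "id": "gpt-5-mini",
            "params": {
              "serviceTier": "flex"
            }
          }
        ]
      }
    }
  }
}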
Additional information
• API detail: {"serviceTier": {"type": "flex"}} goes at the top level of the request body (not inside inferenceConfig)