fix(bedrock): encode model arns for OpenAI compatible bedrock imported models #21701
Merged
Sameerlite merged 1 commit into BerriAI:main on Feb 23, 2026
Conversation
Greptile Summary
This PR fixes a bug where Bedrock OpenAI-compatible imported models using ARNs (e.g., arn:...:imported-model/abc123) failed because the / in the ARN was not URL-encoded in the invoke request path, causing AWS to misparse the URL.
Confidence Score: 4/5
| Filename | Overview |
|---|---|
| litellm/llms/bedrock/chat/invoke_transformations/amazon_openai_transformation.py | Adds ARN encoding for model IDs using CommonUtils.encode_bedrock_runtime_modelid_arn before constructing the invoke URL. The encoding correctly converts slashes in ARN resource types (e.g., :imported-model/ to :imported-model%2F) to prevent them from being treated as path separators. Import placed at module level per project conventions. |
| tests/llm_translation/test_bedrock_completion.py | Updates existing test assertion to expect URL-encoded slash (%2F) in the imported-model ARN, matching the new encoding behavior. Test validates the fix is working correctly. |
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant LiteLLM as LiteLLM (AmazonBedrockOpenAIConfig)
    participant Encoder as CommonUtils.encode_bedrock_runtime_modelid_arn
    participant Bedrock as AWS Bedrock Runtime
    Client->>LiteLLM: completion(model="bedrock/openai/arn:...:imported-model/abc123")
    LiteLLM->>LiteLLM: _get_openai_model_id() strips prefixes
    Note right of LiteLLM: model_id = "arn:...:imported-model/abc123"
    LiteLLM->>Encoder: encode_bedrock_runtime_modelid_arn(model_id)
    Encoder-->>LiteLLM: "arn:...:imported-model%2Fabc123"
    LiteLLM->>LiteLLM: Build URL: /model/{encoded_model_id}/invoke
    LiteLLM->>Bedrock: POST /model/arn:...:imported-model%2Fabc123/invoke
    Bedrock-->>Client: Response
```
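The encoding step in the diagram can be sketched as follows. This is an illustrative stand-in for LiteLLM's `CommonUtils.encode_bedrock_runtime_modelid_arn` helper, not its actual implementation:

```python
from urllib.parse import quote


def encode_bedrock_runtime_modelid_arn(model_id: str) -> str:
    """Percent-encode an ARN model id so '/' survives as part of one URL
    path segment instead of being parsed as a path separator.

    Sketch only -- the real helper lives in LiteLLM's CommonUtils.
    """
    if model_id.startswith("arn:"):
        # Encode '/' (-> %2F) but leave ':' intact, since colons are the
        # ARN field separators and are valid inside a path segment.
        return quote(model_id, safe=":")
    # Plain model ids (no ARN) need no encoding.
    return model_id


arn = "arn:aws:bedrock:us-east-1:123456789012:imported-model/abc123"
encoded = encode_bedrock_runtime_modelid_arn(arn)
# encoded == "arn:aws:bedrock:us-east-1:123456789012:imported-model%2Fabc123"
url = f"https://bedrock-runtime.us-east-1.amazonaws.com/model/{encoded}/invoke"
```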
Last reviewed commit: 4885b36
Relevant issues
Fixes an error dating from #17097, where OpenAI-compatible Bedrock imported models were introduced.
Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR
- I have added at least 1 test in the tests/litellm/ directory (hard requirement - see details)
- My PR passes make test-unit
- I have asked @greptileai and received a Confidence Score of at least 4/5 before requesting a maintainer review

CI (LiteLLM team)
Branch creation CI run
Link:
CI run for the last commit
Link:
Merge / cherry-pick CI run
Links:
Type
🐛 Bug Fix
Changes
This PR fixes a bug in how Bedrock model ARNs are sent when using the
bedrock/openai/ provider route. Model IDs that are ARNs (e.g. arn:aws:bedrock:...:imported-model/xyz123xyz123) must be URL-encoded when used in the request path; otherwise the AWS API misparses the path and returns an UnknownOperationException error.

This only affects Bedrock models that use the OpenAI-compatible Runtime API (e.g. imported Qwen2.5, Qwen2-VL, Qwen2.5-VL, and GPT-OSS). Models invoked through the Converse API already have their ARNs automatically encoded in converse_handler.py, but the Runtime invoke path did not, so ARNs containing / (e.g. :imported-model/...) were sent unencoded and triggered the error.
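A minimal check of the intended behavior, using a hypothetical `build_invoke_url` helper that mirrors the fixed code path (the real assertion lives in tests/llm_translation/test_bedrock_completion.py):

```python
from urllib.parse import quote


def build_invoke_url(region: str, model_id: str) -> str:
    # Hypothetical helper mirroring the fixed code path: percent-encode
    # ARN model ids before interpolating them into the invoke URL.
    if model_id.startswith("arn:"):
        model_id = quote(model_id, safe=":")
    return f"https://bedrock-runtime.{region}.amazonaws.com/model/{model_id}/invoke"


url = build_invoke_url(
    "us-east-1",
    "arn:aws:bedrock:us-east-1:123456789012:imported-model/abc123",
)
assert "%2F" in url                      # slash is percent-encoded
assert "imported-model/abc123" not in url  # raw slash no longer sent
```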