[Fix] fix decode OOM due to wrong estimation when ignore_eos #7328

Closed
DarkSharpness wants to merge 2 commits into sgl-project:main from DarkSharpness:dev_fix_decode_oom

Conversation

@DarkSharpness (Collaborator)

Motivation

When running benchmarks with page_size > 1 (in our case, page_size = 32), we frequently ran into "decode OOM", especially when the hierarchical cache is enabled. As part of #7194 (see this commit), we fix the problem.

I'm not 100% sure this is the root cause of the decode OOM, but the following logic appears to be incorrect, and this PR fixes the decode OOM in my case.

        if running_batch is not None:
            self.rem_total_token_offset += sum(
                [
                    min(
                        (r.sampling_params.max_new_tokens - len(r.output_ids)),
                        CLIP_MAX_NEW_TOKENS_ESTIMATION,
                    )
                    * self.new_token_ratio
                    for r in running_batch.reqs
                ]
            )

When a request has ignore_eos set, we shouldn't multiply by new_token_ratio, since its output length is fixed: the request will never stop early at an EOS token. For example, if max_new_tokens = 512 and new_token_ratio = 0.4, the scheduler reserves only about 205 tokens even though the request is guaranteed to produce all 512, so memory can run out during decode.

Modifications

Fix the bug and add a safer wrapper for the request.
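
A minimal sketch of the fix, not the exact implementation: the method name estimated_max_new_tokens and the SGLANG_CLIP_MAX_NEW_TOKENS_ESTIMATION environment variable come from the PR summary below, while the signature, the default clip value, and the field layout are assumptions for illustration.

import os

# Per this PR, the clip constant lives in schedule_batch.py and can be
# overridden via an environment variable; the default here is an assumption.
CLIP_MAX_NEW_TOKENS_ESTIMATION = int(
    os.environ.get("SGLANG_CLIP_MAX_NEW_TOKENS_ESTIMATION", "4096")
)


class Req:
    ...  # existing fields: sampling_params, output_ids, etc.

    def estimated_max_new_tokens(self, new_token_ratio: float) -> int:
        """Estimate how many more tokens this request may still generate."""
        remaining = self.sampling_params.max_new_tokens - len(self.output_ids)
        if self.sampling_params.ignore_eos:
            # The output length is fixed, so neither the clip nor the
            # early-stop discount (new_token_ratio) should shrink it.
            return remaining
        return int(min(remaining, CLIP_MAX_NEW_TOKENS_ESTIMATION) * new_token_ratio)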

Checklist

@gemini-code-assist (Bot) left a comment

Summary of Changes

Hello @DarkSharpness, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request provides a crucial fix for a 'decode OOM' issue encountered during model inference, specifically when ignore_eos is enabled. The problem stemmed from an inaccurate estimation of token requirements by the scheduler. The solution involves refactoring the token estimation logic into a dedicated method on the request object, ensuring correct calculation based on whether ignore_eos is active, and centralizing a key configuration constant. This change aims to improve the stability and efficiency of the token scheduling process.

Highlights

  • Fix for Decode OOM: This pull request addresses and fixes a critical 'decode OOM' (Out Of Memory) issue that was occurring during benchmark runs, particularly when page_size was greater than 1 and hierarchical caching was enabled. The root cause was identified as an incorrect estimation of remaining tokens for requests where ignore_eos (ignore end-of-sequence) was set, leading to memory exhaustion.
  • Introduced estimated_max_new_tokens Method: A new helper method, estimated_max_new_tokens, has been added to the Req class. This method centralizes and encapsulates the logic for calculating the estimated maximum new tokens a request can generate, ensuring that the new_token_ratio and CLIP_MAX_NEW_TOKENS_ESTIMATION are correctly applied only when ignore_eos is false, thus preventing over-estimation for fixed-length outputs.
  • Centralized CLIP_MAX_NEW_TOKENS_ESTIMATION: The CLIP_MAX_NEW_TOKENS_ESTIMATION constant, which is used to limit the estimation of maximum new tokens, has been moved from schedule_policy.py to schedule_batch.py. It is now also configurable via the SGLANG_CLIP_MAX_NEW_TOKENS_ESTIMATION environment variable, allowing for more flexible tuning.
  • Refactored Token Estimation Usage: All instances within schedule_policy.py where token estimation was previously calculated manually (e.g., in the __init__ method, add_chunked_req, add_req_state, and add_one_req) have been updated to utilize the new req.estimated_max_new_tokens() helper method. This refactoring improves code clarity, reduces duplication, and ensures consistent and correct token estimation across the scheduler.
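
Building on the sketch above, a hedged illustration of what the refactored call sites in schedule_policy.py might look like; running_batch, reqs, and rem_total_token_offset come from the quoted code, while the call signature is the assumed one from the sketch, not necessarily the PR's exact API.

        # Before: the inline estimate applied new_token_ratio unconditionally.
        # After: each request decides whether the early-stop discount applies.
        if running_batch is not None:
            self.rem_total_token_offset += sum(
                r.estimated_max_new_tokens(self.new_token_ratio)
                for r in running_batch.reqs
            )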

@gemini-code-assist (Bot) left a comment

Code Review

This pull request addresses a high-severity decode OOM issue caused by an incorrect estimation of remaining tokens when the ignore_eos parameter is enabled. The fix introduces a new method estimated_max_new_tokens in the Req class to centralize and correct this logic. The code in schedule_policy.py is refactored to utilize this new method, improving code maintainability and directly resolving the OOM problem. The changes adhere to common Python practices (like PEP 8).

Comment threads:

  • python/sglang/srt/managers/schedule_batch.py
  • python/sglang/srt/managers/schedule_policy.py
@xiezhq-hermann xiezhq-hermann self-assigned this Jun 18, 2025
@hnyls2002 hnyls2002 self-assigned this Jun 19, 2025
@DarkSharpness (Collaborator, Author)

We found one possible root cause in #7410.

