
minilb support roundrobin #15824

Merged

iforgetmyname merged 1 commit into sgl-project:ifmn/eagle-dp-attn from Hexq0210:hexq_minilb on Dec 25, 2025

Conversation

@Hexq0210
Contributor

Motivation

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @Hexq0210, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the minilb component by integrating flexible round-robin load balancing capabilities. It provides operators with granular control over how requests are distributed among available prefill and decode servers, as well as how bootstrap room identifiers are assigned, through the use of environment variables. This change aims to improve request distribution efficiency and predictability within the system.

Highlights

  • Round-Robin Instance Selection: Implemented a round-robin strategy for selecting prefill and decode servers, which can be enabled by setting the SGLANG_INSTANCE_ROUND_ROBIN environment variable. When disabled, server selection reverts to random.
  • Dynamic Bootstrap Room Generation: Introduced a new mechanism for generating bootstrap_room values that supports a per-prefill-server round-robin counter, controlled by the SGLANG_DP_ROUND_ROBIN environment variable. This allows for more controlled distribution of requests at a deeper level.
  • Code Refactoring: The logic for generating bootstrap_room has been encapsulated into a new next_round_robin_num method within the MiniLB class, and the standalone _generate_bootstrap_room function has been removed.
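The two selection modes described in the highlights can be illustrated with a minimal sketch. The class name `RoundRobinPicker` and method `pick` are hypothetical and exist only to demonstrate the behavior; they are not the PR's actual code, though the `SGLANG_INSTANCE_ROUND_ROBIN` flag and the counter-modulo pattern mirror the change:

```python
import os
import random

# Hypothetical sketch of the two selection modes: round-robin when the
# SGLANG_INSTANCE_ROUND_ROBIN env var is "1", random otherwise.
class RoundRobinPicker:
    def __init__(self, prefill_urls):
        self.prefill_urls = prefill_urls
        self.req_nums = 0
        # Read the env flag once at construction time.
        self.round_robin = os.getenv("SGLANG_INSTANCE_ROUND_ROBIN", "0") == "1"

    def pick(self):
        if self.round_robin:
            # Cycle through the servers in order.
            idx = self.req_nums % len(self.prefill_urls)
            self.req_nums += 1
        else:
            # Fall back to uniform random selection.
            idx = random.randint(0, len(self.prefill_urls) - 1)
        return self.prefill_urls[idx]
```

With the flag enabled, successive calls walk the server list in order and wrap around, giving a predictable distribution.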




@gemini-code-assist (Bot) left a comment


Code Review

This pull request adds round-robin load balancing capabilities to the mini_lb, controlled by environment variables. The implementation is a good step forward. My review focuses on improving code efficiency, clarity, and adherence to best practices. I've identified some redundant code, inefficient environment variable lookups within hot paths, and opportunities to use more idiomatic Python. Addressing these points will make the code more robust and maintainable.

Comment on lines +78 to +90
    def next_round_robin_num(self, prefill_server):
        is_dp_round_robin = os.getenv("SGLANG_DP_ROUND_ROBIN", "0") == "1"
        if is_dp_round_robin:
            self.dp_attention_round_robin_size_dict[prefill_server] = (
                self.dp_attention_round_robin_size_dict[prefill_server] + 1
            )
            bootstrap_room = self.dp_attention_round_robin_size_dict[prefill_server]
        else:
            bootstrap_room = random.randint(0, 2**63 - 1)
            if lb.enable_trace:
                trace_req_start(bootstrap_room, bootstrap_room, role="router")
                trace_slice_start("mini_lb_launch", bootstrap_room)
        return bootstrap_room


high

This method can be improved in several ways:

  • Inefficient Environment Variable Access: os.getenv("SGLANG_DP_ROUND_ROBIN", "0") is called on every invocation. For better performance, this check should be performed once in the __init__ method and the result stored in an instance attribute (e.g., self.is_dp_round_robin).
  • Use of Global Variable: The code uses the global variable lb to access enable_trace. It's better practice to use self.enable_trace for improved encapsulation and to avoid reliance on global state.
  • Simplified Increment: The increment operation can be written more concisely using the += operator.

The following suggestion addresses the use of the global variable and simplifies the increment. Please also consider moving the environment variable check to __init__.

    def next_round_robin_num(self, prefill_server):
        is_dp_round_robin = os.getenv("SGLANG_DP_ROUND_ROBIN", "0") == "1"
        if is_dp_round_robin:
            self.dp_attention_round_robin_size_dict[prefill_server] += 1
            bootstrap_room = self.dp_attention_round_robin_size_dict[prefill_server]
        else:
            bootstrap_room = random.randint(0, 2**63 - 1)
            if self.enable_trace:
                trace_req_start(bootstrap_room, bootstrap_room, role="router")
                trace_slice_start("mini_lb_launch", bootstrap_room)
        return bootstrap_room
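The reviewer's remaining point, moving the environment-variable lookup into `__init__`, can be sketched as follows. The class name `MiniLBSketch` is an assumption for illustration; the `SGLANG_DP_ROUND_ROBIN` flag and the per-server counter dict mirror the PR, but this is not the actual patch:

```python
import os
import random

# Sketch of caching the env flag at construction time instead of reading it
# on every call. Class name is hypothetical; tracing is omitted for brevity.
class MiniLBSketch:
    def __init__(self, prefill_urls):
        # Read SGLANG_DP_ROUND_ROBIN once and store the result.
        self.is_dp_round_robin = os.getenv("SGLANG_DP_ROUND_ROBIN", "0") == "1"
        # One counter per prefill server, starting at zero.
        self.dp_attention_round_robin_size_dict = dict.fromkeys(prefill_urls, 0)

    def next_round_robin_num(self, prefill_server):
        if self.is_dp_round_robin:
            # Monotonically increasing bootstrap_room per prefill server.
            self.dp_attention_round_robin_size_dict[prefill_server] += 1
            return self.dp_attention_round_robin_size_dict[prefill_server]
        # Otherwise fall back to a random 63-bit room id.
        return random.randint(0, 2**63 - 1)
```

The per-call `os.getenv` cost is small in absolute terms, but hoisting it also makes the mode explicit and testable as an instance attribute.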

Comment on lines +121 to +128
        is_instance_round_robin = os.getenv("SGLANG_INSTANCE_ROUND_ROBIN", "0") == "1"
        if is_instance_round_robin:
            pidx = self.req_nums % len(self.prefill_urls)
            didx = self.req_nums % len(self.decode_urls)
            self.req_nums = self.req_nums + 1
        else:
            pidx = random.randint(0, len(self.prefill_urls) - 1)
            didx = random.randint(0, len(self.decode_urls) - 1)


high

This logic can be improved:

  • Inefficient environment variable access: os.getenv("SGLANG_INSTANCE_ROUND_ROBIN", "0") is called on every request. This should be read once in __init__ and stored as an instance attribute for efficiency.
  • Simplified increment: self.req_nums = self.req_nums + 1 can be written more concisely as self.req_nums += 1.

The following suggestion simplifies the increment. Please also consider moving the environment variable check to __init__.

        is_instance_round_robin = os.getenv("SGLANG_INSTANCE_ROUND_ROBIN", "0") == "1"
        if is_instance_round_robin:
            pidx = self.req_nums % len(self.prefill_urls)
            didx = self.req_nums % len(self.decode_urls)
            self.req_nums += 1
        else:
            pidx = random.randint(0, len(self.prefill_urls) - 1)
            didx = random.randint(0, len(self.decode_urls) - 1)

Comment on lines +67 to +70
        self.dp_attention_round_robin_size_dict = dict.fromkeys(self.prefill_urls, 0)
        self.dp_attention_round_robin_size_dict = {
            url: i * 20000 for i, url in enumerate(self.prefill_urls)
        }


medium

This initialization can be improved in two ways:

  1. Redundant Code: The assignment on line 67 is immediately overwritten by the dictionary comprehension on lines 68-70, making it unnecessary.
  2. Magic Number: The value 20000 on line 69 is a "magic number," making the code harder to understand. It should be defined as a named constant at the module level (e.g., DP_ATTENTION_ROUND_ROBIN_OFFSET = 20000) to improve readability and maintainability.
Suggested change

Before:

        self.dp_attention_round_robin_size_dict = dict.fromkeys(self.prefill_urls, 0)
        self.dp_attention_round_robin_size_dict = {
            url: i * 20000 for i, url in enumerate(self.prefill_urls)
        }

After:

        # The constant 20000 should be defined at the module level for clarity.
        self.dp_attention_round_robin_size_dict = {
            url: i * 20000 for i, url in enumerate(self.prefill_urls)
        }
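The named-constant suggestion can be sketched as below. The constant name `DP_ATTENTION_ROUND_ROBIN_OFFSET` follows the reviewer's example but is otherwise an assumption; the idea is that each prefill server's counter starts in its own disjoint range so round-robin bootstrap_room values don't collide across servers:

```python
# Module-level named constant replacing the magic number 20000.
# The name follows the reviewer's example and is illustrative.
DP_ATTENTION_ROUND_ROBIN_OFFSET = 20000

def initial_counters(prefill_urls):
    # Offset each server's starting counter so per-server round-robin
    # bootstrap_room values occupy non-overlapping ranges.
    return {
        url: i * DP_ATTENTION_ROUND_ROBIN_OFFSET
        for i, url in enumerate(prefill_urls)
    }
```

A named constant also makes the implicit assumption visible: collisions are only avoided while each server issues fewer than 20000 rooms, which is worth documenting next to the constant.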

@iforgetmyname iforgetmyname merged commit 7cc1c89 into sgl-project:ifmn/eagle-dp-attn Dec 25, 2025
1 check passed
Liwansi added a commit to iforgetmyname/sglang that referenced this pull request Dec 29, 2025
…glang into eagle-sche

* 'ifmn/eagle-dp-attn' of https://github.com/sgl-project/sglang: (22 commits)
  dp scheduler enhance support with chunked prefill (sgl-project#16071)
  modify suffix decoding
  CI dependency update (sgl-project#16063)
  fix rotary_embedding init npu (sgl-project#16011)
  feat: bugfix and accuracy fix for stablelm2_1_6b (sgl-project#15932)
  Update model and feature support for Ascend NPU (sgl-project#16005)
  Bugfix for Llama4 (sgl-project#15929)
  Bugfix for ds-vl2 (sgl-project#15894)
  gme qwen vl runners fix (sgl-project#15899)
  add profiling in scheduler (sgl-project#15876)
  llama use triton rope op (sgl-project#15855)
  suffix decoding adapt npu
  suffix decoding adapt npu
  Add suffix decoding speculative algorithm from feature 13553
  cherry sgl-project#15434: qwen3 vl performance update
  cherry sgl-project#15597: fix Qwen3-VL-30B-A3B-Instruct accuracy loss
  [Schedule] bug fix for schedule enhancer (sgl-project#15834)
  minilb support roundrobin (sgl-project#15824)
  fix torchair compile issue
  cherry sgl-project#15187: lora fix
  ...

# Conflicts:
#	python/sglang/srt/managers/scheduler.py
#	python/sglang/srt/managers/scheduler_enhancer.py
JiaruiChang5268 pushed a commit to JiaruiChang5268/sglang that referenced this pull request Jan 13, 2026
