Refactor: Align pd_buffer_size to chunk size in PD backend#2694

Merged
deng451e merged 28 commits into LMCache:dev from hlin99:ww10_PR_pd_buffer_size
Apr 4, 2026

Conversation

@hlin99
Contributor

@hlin99 hlin99 commented Mar 5, 2026

  • Add buffer size alignment logic to prevent assertion error
  • Calculate aligned_buffer_size as (origin_size // chunk_size) * chunk_size
  • Add informative logging when buffer size is adjusted
  • Release excess buffer memory that can't be aligned
  • Follows the same pattern as local_cpu_backend.py

Prior to this change, reusing a single lmcache.yaml across multiple models was impossible due to strict chunk_size alignment requirements for pd_buffer_size, which vary by model KV size. Configuring this manually was difficult and often required a dry run.

With this update, pd_buffer_size acts as a maximum allocation ceiling. The system now automatically aligns the size and frees the remaining unaligned memory, ensuring robust auto-alignment regardless of user configuration.
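The rounding-down behavior described above can be sketched as follows. This is a minimal illustration, not the actual LMCache code: `align_buffer_size`, `origin_buffer_size`, and `chunk_size_bytes` are illustrative names mirroring the PR description.

```python
import logging

logger = logging.getLogger("pd_backend_sketch")

def align_buffer_size(origin_buffer_size: int, chunk_size_bytes: int) -> int:
    """Round the configured pd_buffer_size down to a chunk-size multiple.

    Raises if the configured size cannot hold even a single KV chunk.
    """
    if origin_buffer_size < chunk_size_bytes:
        raise ValueError(
            f"pd_buffer_size ({origin_buffer_size} bytes) is smaller than "
            f"one KV chunk ({chunk_size_bytes} bytes)"
        )
    aligned = (origin_buffer_size // chunk_size_bytes) * chunk_size_bytes
    if aligned != origin_buffer_size:
        # Log the truncation so users can see how much memory was released.
        logger.info(
            "Aligned pd_buffer_size from %d to %d bytes "
            "(chunk size %d, released %d bytes)",
            origin_buffer_size, aligned, chunk_size_bytes,
            origin_buffer_size - aligned,
        )
    return aligned
```

For example, a 1000-byte configured buffer with a 256-byte chunk would be aligned down to 768 bytes, releasing the 232-byte remainder.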


Note

Medium Risk
Changes PD disaggregation memory allocation semantics by rounding pd_buffer_size down to a model-derived chunk multiple and erroring when too small, which could affect allocated capacity and runtime behavior for existing deployments.

Overview
PD backend now treats pd_buffer_size as a ceiling and auto-aligns it to the KV chunk size. During allocator initialization, the requested buffer is rounded down to the nearest multiple of the per-chunk byte size; the backend logs when truncation occurs and raises a clear error if the configured size is smaller than a single chunk.

Docs for disaggregated prefill are updated to reflect the new upper-bound/aligned meaning of pd_buffer_size, and a unit test is added to verify alignment behavior and allocator calls.

Reviewed by Cursor Bugbot for commit a1ffc5a.


Signed-off-by: Tony Lin <tony.lin@intel.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the PD backend to automatically align the pd_buffer_size with the memory chunk size. This change is crucial for preventing runtime assertion errors related to unaligned buffer sizes, ensuring more robust memory allocation. It also provides clear logging for transparency and includes comprehensive unit tests to validate the new alignment mechanism.

Highlights

  • Buffer Size Alignment: Implemented logic within the PD backend to automatically align the pd_buffer_size to be a multiple of the calculated memory chunk size, which prevents potential assertion errors during memory allocation.
  • Informative Logging: Added detailed logging messages that report the original and aligned buffer sizes, the chunk size, and the amount of excess memory released when an adjustment occurs.
  • Allocator Initialization Update: Modified the init_func call to utilize the newly calculated aligned_buffer_size and explicitly pass shapes and dtypes for memory allocation.
  • New Unit Tests: Introduced a new test file, test_pd_backend_buffer_alignment.py, containing dedicated unit tests to thoroughly verify the correctness of the buffer size alignment logic under various scenarios.


Changelog
  • lmcache/v1/storage_backend/pd_backend.py
    • Imported the get_size_bytes utility function.
    • Added logic to calculate chunk_size_bytes and derive aligned_buffer_size from origin_buffer_size.
    • Included an if block to log buffer size adjustments, showing original, aligned, and chunk sizes, and released excess memory.
    • Updated the init_func call to use aligned_buffer_size, shapes, and dtypes.
  • tests/v1/storage_backend/test_pd_backend_buffer_alignment.py
    • Added a new test file to specifically test buffer size alignment in the PD backend.
    • Included create_test_metadata helper function for consistent metadata creation.
    • Implemented test_buffer_size_alignment_cpu to verify that unaligned buffer sizes are correctly adjusted and do not cause assertion errors.
    • Added test_buffer_size_already_aligned to confirm that no adjustments are made when the provided buffer size is already a multiple of the chunk size.
Activity
  • Unit tests were added to cover the new buffer alignment logic, ensuring its correctness and preventing regressions.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request refactors the pd_backend to align the pd_buffer_size with the chunk size, which is a good improvement to prevent potential assertion errors. The added logging for buffer size adjustments is also helpful for debugging. The new unit tests effectively cover the alignment logic.

My review includes a few suggestions for improvement:

  • A critical edge case where the buffer size could be aligned to zero is not handled. I've suggested adding a check to raise an error in this scenario.
  • A point on code style regarding the placement of an import.
  • Suggestions to improve the new tests by refactoring duplicated code and adding a test case for the edge case mentioned above.

Comment thread lmcache/v1/storage_backend/pd_backend.py
Comment thread lmcache/v1/storage_backend/pd_backend.py Outdated
Comment thread tests/v1/storage_backend/test_pd_backend_buffer_alignment.py Outdated
Comment thread tests/v1/storage_backend/test_pd_backend_buffer_alignment.py Outdated
Collaborator

@DongDongJu DongDongJu left a comment


Generally LGTM.
Please change the comment and add at least one test that asserts the exact aligned size, not just divisibility.
The other test doesn't look needed.
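An exact-value test of the kind requested here might look like the sketch below. This is hypothetical: the PR's actual test exercises the PD backend itself, while `align_buffer_size` here is a bare illustrative helper, not the backend's API.

```python
# Hypothetical sketch of an exact-value alignment test; the real PR test
# goes through the PD backend rather than a standalone helper function.
def align_buffer_size(origin: int, chunk: int) -> int:
    # Round down to the nearest chunk multiple.
    return (origin // chunk) * chunk

def test_exact_aligned_size():
    chunk = 4096
    origin = chunk * 10 + 123        # deliberately unaligned
    aligned = align_buffer_size(origin, chunk)
    assert aligned == chunk * 10     # exact value, not just divisibility
    assert origin - aligned == 123   # excess that would be released
```

Asserting the exact value catches off-by-one rounding bugs that a plain `aligned % chunk == 0` check would miss.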

Comment thread lmcache/v1/storage_backend/pd_backend.py Outdated
hlin99 added 2 commits March 10, 2026 01:31
Signed-off-by: Tony Lin <tony.lin@intel.com>
Signed-off-by: Tony Lin <tony.lin@intel.com>
@hlin99
Contributor Author

hlin99 commented Mar 10, 2026

Generally LGTM. Please change the comment and add at least one test that asserts the exact aligned size, not just divisibility. The other test doesn't look needed.

Hi @DongDongJu, thanks for your comments. I accepted all of them and submitted two new commits to address the log message issue and the UT issue per your advice, which was very helpful. Thanks.

@hlin99
Contributor Author

hlin99 commented Mar 12, 2026

Hi @DongDongJu, I removed the UT because it was inexplicably causing the CI pipeline to fail (it seems the underlying PD backend was not closed gracefully). Given how straightforward the underlying function is, we can safely remove the test to restore CI stability without compromising code quality. Let me know if you have any concerns. Thanks.

hlin99 added 2 commits March 13, 2026 04:32
Signed-off-by: Tony Lin <tony.lin@intel.com>
@hlin99
Contributor Author

hlin99 commented Mar 13, 2026

The UT issue has been resolved and the test restored. @DongDongJu

@hlin99 hlin99 requested a review from DongDongJu March 13, 2026 06:05
@hlin99
Contributor Author

hlin99 commented Mar 23, 2026

@hlin99 Hello Tony, what happens if we just set save_unfull_chunk to false in this case? Does the problem still exist?

Hi @DongDongJu, the answer is yes: the problem persists regardless of the save_unfull_chunk setting. The current logic mandates that the buffer size be aligned to "chunk_size x kv_size", which is technically unnecessary. Every time I change a model or chunk_size, I have to adjust the buffer size value, which is annoying because I must do the calculation before making the change.

From an ease-of-use standpoint, we can do the alignment and allocate only the aligned portion of memory while making sure not to exceed the buffer size limit, as this PR does.
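To illustrate why doing this calculation by hand is tedious, here is a rough back-of-the-envelope computation of a per-chunk byte size. All numbers below are hypothetical and not taken from the PR; the real backend derives this from the model's KV shapes and dtypes.

```python
# Illustrative only: how the per-chunk byte size depends on model KV
# geometry. Every value here is a made-up example, not an LMCache default.
chunk_size   = 256    # tokens per chunk
num_layers   = 32     # transformer layers
num_kv_heads = 8      # KV heads (e.g. with grouped-query attention)
head_dim     = 128    # per-head dimension
dtype_bytes  = 2      # fp16 / bf16
kv_factor    = 2      # key + value

chunk_size_bytes = (chunk_size * num_layers * num_kv_heads
                    * head_dim * dtype_bytes * kv_factor)
print(chunk_size_bytes)  # 33554432 bytes = 32 MiB per chunk
```

Change any one of these parameters (a new model, a different chunk_size) and the required pd_buffer_size multiple changes with it, which is exactly the recalculation this PR automates away.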

@DongDongJu
Collaborator

Yes, I just realized that save_unfull_chunk=true is required for the PD backend. Let me check a few more pieces of code when I'm back at my desk.

@hlin99
Contributor Author

hlin99 commented Mar 24, 2026

Sure, thanks.

@hlin99
Contributor Author

hlin99 commented Apr 1, 2026

Hi @DongDongJu, do you have any remaining concerns? Thanks.

Collaborator

@DongDongJu DongDongJu left a comment


Thanks for the work!
Please address the comments.

Comment thread lmcache/v1/storage_backend/pd_backend.py
Comment thread tests/v1/storage_backend/test_pd_backend_buffer_alignment.py Outdated
Comment thread tests/v1/storage_backend/test_pd_backend_buffer_alignment.py
hlin99 added 2 commits April 3, 2026 04:42
Signed-off-by: Tony Lin <tony.lin@intel.com>
Signed-off-by: Tony Lin <tony.lin@intel.com>

@cursor cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

There are 2 total unresolved issues (including 1 from previous review).


Comment thread lmcache/v1/storage_backend/pd_backend.py
@hlin99 hlin99 requested a review from DongDongJu April 3, 2026 04:56
@hlin99
Contributor Author

hlin99 commented Apr 3, 2026

Thanks for the work! Please address the comments.

Hi @DongDongJu, all addressed. Please check. Thanks.

Collaborator

@DongDongJu DongDongJu left a comment


LGTM

Collaborator

@deng451e deng451e left a comment


lgtm

@deng451e deng451e added the full Run comprehensive tests on this PR label Apr 4, 2026
@deng451e deng451e enabled auto-merge (squash) April 4, 2026 00:42
@deng451e deng451e merged commit 2108248 into LMCache:dev Apr 4, 2026
36 checks passed
@hlin99 hlin99 deleted the ww10_PR_pd_buffer_size branch April 25, 2026 05:30