[MooncakeStore] support batch api#428

Merged
stmatengss merged 2 commits into kvcache-ai:main from xinranwang17:support-batch-api
Jun 5, 2025

Conversation

@xinranwang17
Contributor

@xinranwang17 xinranwang17 commented May 30, 2025

This PR supports putting/getting a batch of data in order to reduce master RPC calls. It provides BatchPutStart, BatchPutEnd, BatchPutRevoke, and BatchGet APIs. I am still working on the Python Store Batch API and will post it soon. Related RFC: #380
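As a rough illustration of the four calls named above, here is a toy in-memory model. The class, signatures, and semantics are hypothetical stand-ins, not Mooncake's actual interface: the point is only that each phase of a batch put (start, end, revoke) and a batch get costs a single master RPC regardless of how many keys it covers.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Hypothetical toy model of the batch protocol; not Mooncake's real API.
class ToyMasterClient {
 public:
  int rpc_count = 0;  // how many "master RPCs" we issued

  // One RPC reserves space for the whole batch of keys.
  void BatchPutStart(const std::vector<std::string>& keys) {
    ++rpc_count;
    for (const auto& k : keys) pending_.insert(k);
  }

  // One RPC commits every pending key in the batch.
  void BatchPutEnd(const std::unordered_map<std::string, std::string>& kv) {
    ++rpc_count;
    for (const auto& [k, v] : kv)
      if (pending_.erase(k)) store_[k] = v;
  }

  // One RPC aborts any keys still pending (never committed).
  void BatchPutRevoke(const std::vector<std::string>& keys) {
    ++rpc_count;
    for (const auto& k : keys) pending_.erase(k);
  }

  // One RPC reads a whole batch of keys.
  std::unordered_map<std::string, std::string> BatchGet(
      const std::vector<std::string>& keys) {
    ++rpc_count;
    std::unordered_map<std::string, std::string> out;
    for (const auto& k : keys) {
      auto it = store_.find(k);
      if (it != store_.end()) out[k] = it->second;
    }
    return out;
  }

 private:
  std::unordered_set<std::string> pending_;
  std::unordered_map<std::string, std::string> store_;
};
```

With per-key calls, putting and getting N keys would cost on the order of 2N+ RPCs; with the batch calls above it is a constant three, which is the motivation stated in the description.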

support to put/get a batch of data in order to reduce master rpc call.
Collaborator

@xiaguan xiaguan left a comment

I think the interface is good! It's just in the implementation that I feel we might need to pay a bit more attention to the parallelism aspect.

const std::vector<std::string>& keys,
std::unordered_map<std::string, std::vector<Replica::Descriptor>>&
batch_replica_list) {
for (const auto& key : keys) {
Collaborator

I'm not sure putting the batch processing directly in the master is the best approach, as it might lose some parallelism.

I think a better option could be to put it in the master client instead. We could give each request its own task there, which would allow for better parallel execution. What do you think of that idea?
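The per-request-task idea could be sketched as follows, assuming a hypothetical single-key call `FetchOne` standing in for one master RPC; `std::async` is used purely for illustration (the PR author later mentions implementing this with async gRPC calls instead):

```cpp
#include <future>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for a per-key RPC to the master.
static std::string FetchOne(const std::string& key) {
  return "value-for-" + key;
}

// Each key gets its own task, so the per-key requests can run in
// parallel instead of being serialized in a loop inside the master.
std::unordered_map<std::string, std::string> BatchGetParallel(
    const std::vector<std::string>& keys) {
  std::vector<std::future<std::string>> inflight;
  inflight.reserve(keys.size());
  for (const auto& key : keys)
    inflight.push_back(std::async(std::launch::async, FetchOne, key));

  std::unordered_map<std::string, std::string> result;
  for (size_t i = 0; i < keys.size(); ++i)
    result[keys[i]] = inflight[i].get();  // collect in key order
  return result;
}
```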

Contributor Author

Thanks for your comments! I will implement async concurrent gRPC calls on the master client side and update the PR later.

ObjectInfo object_info;
object_info.replica_list = object_info_it->second;
object_info.error_code = ErrorCode::OK;
if (Get(key, object_info, slices_it->second) != ErrorCode::OK) {
Collaborator

The same goes for the data transfer part – I think transferring objects one by one might also limit batch parallelism.
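One common fix for the serialized loop quoted above, sketched here with a hypothetical async data-plane call `StartTransfer`: issue every transfer first, then wait afterwards, so the transfers overlap instead of running back-to-back.

```cpp
#include <future>
#include <vector>

// Hypothetical stand-in for an asynchronous data transfer; here each
// "transfer" just pretends to move `nbytes` bytes.
static std::future<int> StartTransfer(int nbytes) {
  return std::async(std::launch::async, [nbytes] { return nbytes; });
}

int TransferAll(const std::vector<int>& sizes) {
  // Phase 1: issue all transfers without blocking between them.
  std::vector<std::future<int>> inflight;
  inflight.reserve(sizes.size());
  for (int n : sizes) inflight.push_back(StartTransfer(n));

  // Phase 2: wait for completions; sum bytes as a completion check.
  int total = 0;
  for (auto& f : inflight) total += f.get();
  return total;
}
```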

Contributor Author

Yes, that makes sense. I'd like to improve the batch transfer in follow-up PRs.

Comment thread on mooncake-store/src/client.cpp (outdated)
@xiaguan
Collaborator

xiaguan commented Jun 4, 2025

I think the interface looks good. We can probably improve the implementation step by step in future PRs. If you're okay with that, I'm happy to approve this one for now?

@xinranwang17
Contributor Author

I think the interface looks good. We can probably improve the implementation step by step in future PRs. If you're okay with that, I'm happy to approve this one for now?

Sure, I will improve the implementation in follow-up PRs.

@xinranwang17 xinranwang17 changed the title support batch api [Mooncake-Store] support batch api Jun 4, 2025
@xinranwang17 xinranwang17 changed the title [Mooncake-Store] support batch api [MooncakeStore] support batch api Jun 4, 2025
@xinranwang17 xinranwang17 requested a review from stmatengss June 4, 2025 06:02
@stmatengss stmatengss merged commit 4d0c85d into kvcache-ai:main Jun 5, 2025
10 checks passed
wanyue-wy pushed a commit to wanyue-wy/Mooncake that referenced this pull request Dec 14, 2025
* support batch api

support to put/get a batch of data in order to reduce master rpc call.

* fix typo
JasonZhang517 pushed a commit to JasonZhang517/Mooncake that referenced this pull request Feb 9, 2026
* support batch api

support to put/get a batch of data in order to reduce master rpc call.

* fix typo
3 participants