[Bugfix] fix the concurrency bug during async put #13

Merged
ApostaC merged 7 commits into LMCache:main from ApostaC:dev/async-put-bugfix
Jul 5, 2024
Conversation


@ApostaC ApostaC commented Jul 5, 2024

No description provided.

@ApostaC ApostaC requested a review from YaoJiayi July 5, 2024 02:01
@ApostaC ApostaC merged commit 34ead53 into LMCache:main Jul 5, 2024
guymguym pushed a commit to guymguym/LMCache that referenced this pull request Jun 11, 2025
[Bugfix] Fix incorrect single-token saves in v1 (LMCache#653)
KevinCheung2259 pushed a commit to KevinCheung2259/LMCache that referenced this pull request Nov 5, 2025
* [bugfix] fix the concurrency bug in non-blocking put

---------

Co-authored-by: ApostaC <jc4xvyp@outlook.com>
sheperdh added a commit to sheperdh/LMCache2 that referenced this pull request Jan 6, 2026
* [XWKV-48] Implement gismo backend (LMCache#7)

* [XWKV-48] Cache fd to optimize performance (LMCache#11)

Cache fd to optimize performance.

Add a thread pool to read/write files in parallel.
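
The thread-pool approach described in this commit could be sketched as follows. This is a minimal illustration, not the actual backend code; the helper name and signature are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def read_files_in_parallel(paths, max_workers=8):
    """Read several files concurrently and return {path: bytes}.

    Illustrative sketch only: file I/O releases the GIL, so a thread
    pool can overlap reads of independent files.
    """
    def _read(path):
        with open(path, "rb") as f:
            return path, f.read()

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(_read, paths))
```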

* [XWKV-48] Support remotely read kv (LMCache#12)

Since Gismo supports locally reading files that were written remotely, we need to change our code to make sure we can read files that are not cached locally.

* [XWKV-68] Use vram API to boost performance (LMCache#13)

Use the vram read/write API to boost performance.

Use the new get/put methods to read/write the meta file.

* [XWKV-68] Use batched contains API

Use the mvfs batched contains API to implement the batch contains interface and boost performance.

Add a retry when getting metadata, in case LMCache reads faster than the backend.
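
The retry described here amounts to a bounded polling loop around the metadata fetch. A generic sketch (the function and parameter names are assumptions for illustration, not the actual LMCache API):

```python
import time

def get_with_retry(fetch, key, retries=5, delay=0.05):
    """Retry a metadata fetch until it succeeds or retries are exhausted.

    Useful when the reader can race ahead of the backend writer, so the
    first few lookups may return nothing even though the data is coming.
    """
    for _ in range(retries):
        value = fetch(key)
        if value is not None:
            return value
        time.sleep(delay)
    raise KeyError(f"metadata for {key!r} not available after {retries} attempts")
```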

---------

Co-authored-by: Jinwen <287310886@qq.com>
DongDongJu pushed a commit to DongDongJu/LMCache that referenced this pull request Feb 22, 2026
* [bugfix] fix the concurrency bug in non-blocking put

---------

Co-authored-by: ApostaC <jc4xvyp@outlook.com>
yoo-kumaneko added a commit to yoo-kumaneko/LMCache that referenced this pull request Apr 13, 2026
…pter (LMCache#13)

Add sync_mode parameter to LMCacheMPWorkerAdapter so that sync-mode
specific behavior (QUERY_PREFETCH_STATUS_WITH_REQ_ID polling and
blocking retrieve waits) only activates when explicitly enabled.
This prevents the default async path from being affected.

- Add sync_mode: bool = False to LMCacheMPWorkerAdapter.__init__
- Gate prefetch status polling in submit_retrieve_request()
- Gate blocking wait in batched_submit_retrieve_requests()
- Pass sync_mode through create_worker_adapter() and connector init
- Rename LMCacheMPConnector to LMCacheMPConnectorDynamic
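
A minimal sketch of the gating pattern the list above describes, assuming an adapter class shaped like the one named in the commit. The method bodies here are placeholders for illustration, not the real implementation:

```python
class LMCacheMPWorkerAdapter:
    """Illustrative sketch: sync-mode behavior activates only when enabled."""

    def __init__(self, sync_mode: bool = False):
        # Default is async, so existing callers are unaffected.
        self.sync_mode = sync_mode

    def submit_retrieve_request(self, req_id: str) -> str:
        if self.sync_mode:
            # Sync-only path: poll prefetch status before returning.
            return self._poll_prefetch_status(req_id)
        return self._submit_async(req_id)

    def _poll_prefetch_status(self, req_id: str) -> str:
        return f"polled:{req_id}"   # placeholder

    def _submit_async(self, req_id: str) -> str:
        return f"async:{req_id}"    # placeholder
```

The key design point is that `sync_mode` defaults to `False`, so the change is opt-in and the default async path keeps its previous behavior.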

Signed-off-by: crclq2018@gmail.com

Signed-off-by: rigginschen <rigginschen@tencent.com>
Co-authored-by: rigginschen <rigginschen@tencent.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
