
[Fix] fix the issue in deserialize worker #7

Closed
ApostaC wants to merge 1 commit into main from fix-patch-0625

Conversation

@ApostaC
Contributor

@ApostaC ApostaC commented Jun 25, 2024

No description provided.

@ApostaC ApostaC requested a review from YaoJiayi June 25, 2024 16:20
@ApostaC ApostaC closed this Jun 30, 2024
@ApostaC ApostaC deleted the fix-patch-0625 branch July 5, 2024 23:08
chenzhengda pushed a commit to chenzhengda/LMCache that referenced this pull request Mar 31, 2025
orozery added a commit to orozery/LMCache that referenced this pull request Apr 27, 2025
* redis lookup server: Fix empty batched_remove

This commit fixes a possible exception when trying to call batched_remove with an empty set of keys.

Signed-off-by: Or Ozeri <oro@il.ibm.com>
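The empty-batch fix described above can be sketched as a simple guard. This is a hypothetical illustration, not the actual LMCache code: in redis-py, calling `delete()` with no keys sends a `DEL` with no arguments, which the Redis server rejects, so an empty batch must be short-circuited first.

```python
def batched_remove(client, keys):
    """Delete a batch of keys via a redis-py-style client.

    Redis rejects a DEL command with no arguments, so an empty
    batch is short-circuited to a no-op instead of being sent.
    """
    if not keys:  # guard: nothing to remove, avoid an invalid DEL
        return 0
    return client.delete(*keys)
```

The same guard pattern applies to any batched command whose server-side form requires at least one argument.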

* p2p: disable distributed server

Currently, the p2p distributed server is broken in vllm v1.
This commit disables it for now.

Signed-off-by: Or Ozeri <oro@il.ibm.com>

---------

Signed-off-by: Or Ozeri <oro@il.ibm.com>
orozery pushed a commit to orozery/LMCache that referenced this pull request May 6, 2025
NumberWan pushed a commit to NumberWan/LMCache that referenced this pull request Aug 27, 2025
sheperdh added a commit to sheperdh/LMCache2 that referenced this pull request Jan 6, 2026
* [XWKV-48] Implement gismo backend (LMCache#7)

* [XWKV-48] Cache fd to optimize performance (LMCache#11)

Cache the fd to optimize performance.

Add a thread pool to read/write files in parallel.

* [XWKV-48] Support remotely read kv (LMCache#12)

Since Gismo supports locally reading files
written remotely, we need to change our code
to make sure we can read files that are not cached.

* [XWKV-68] Use vram API to boost performance (LMCache#13)

Use the vram read/write API to boost performance.

Use the new get/put method to read/write the meta file.

* [XWKV-68] Use batched contains API

Use the mvfs batched contains API to implement
the batched contains interface to
boost performance.

Add a retry when getting metadata in case LMCache
reads faster than the backend.

---------

Co-authored-by: Jinwen <287310886@qq.com>
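The retry described in that commit message can be sketched generically. This is an illustrative helper (the name `get_with_retry` and its parameters are assumptions, not the LMCache API): it retries a metadata fetch a few times with a short backoff, for the case where the reader outpaces the backend writer.

```python
import time

def get_with_retry(fetch, retries=3, delay=0.05):
    """Fetch metadata, retrying briefly if the backend has not
    persisted it yet (e.g. the reader outpaces the writer).

    `fetch` is a zero-argument callable returning None on a miss.
    Returns the fetched value, or None after all retries fail.
    """
    for attempt in range(retries):
        value = fetch()
        if value is not None:
            return value
        time.sleep(delay * (attempt + 1))  # linear backoff between tries
    return None
```

A fixed small retry budget keeps the worst-case latency bounded while absorbing the common write/read race.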
yoo-kumaneko pushed a commit to yoo-kumaneko/LMCache that referenced this pull request Mar 25, 2026
…LMCache multiprocess mode (#35931) (LMCache#7)

Signed-off-by: idellzheng <idellzheng@tencent.com>
