Welcome aboard!!
We’re building a vibrant open-source community and your ideas and code are what make it thrive.
To help you get started, we will continually update this page with a collection of bite-sized “quick start” tasks—dive in and let’s create something amazing together!
Good First Issues:
Documentation: good documentation is what makes LMCache easy to use and adopt!
- Changed to general-purpose LMCache request configs instead of kv_transfer_params: [refact] use request_configs replace tags #1377
- How to configure: https://github.com/LMCache/LMCache/pull/1387/files
- Thread stack trace observability: Support show thread info within cache engine internal api server #1358 (a minimal sketch follows this list)
- Dynamic LMCache log-level adjustment: Support Get or set log level via internal api server #1359 (covered in the same sketch)
- LRU cache policy (a toy LRU sketch also follows this list)
- S3-FIFO cache policy: [Feature] Add S3FIFO cache policy #1341
- Workaround for issue #1346: #1356
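To get a feel for the two internal-api-server items above (#1358, #1359), here is a minimal standard-library sketch; the function names and the "lmcache" logger name are assumptions, not LMCache's actual internals:

```python
# Hypothetical sketch for #1358/#1359: dumping thread stacks and adjusting log
# levels at runtime. Names here are illustrative, not LMCache's actual API.
import logging
import sys
import threading
import traceback

def dump_all_thread_stacks() -> str:
    """Return formatted stack traces for every live thread (cf. #1358)."""
    frames = sys._current_frames()
    lines = []
    for thread in threading.enumerate():
        lines.append(f"--- Thread {thread.name} (id={thread.ident}) ---\n")
        frame = frames.get(thread.ident)
        if frame is not None:
            lines.extend(traceback.format_stack(frame))
    return "".join(lines)

def set_log_level(logger_name: str, level_name: str) -> None:
    """Change a named logger's level at runtime (cf. #1359)."""
    level = logging.getLevelName(level_name.upper())
    if not isinstance(level, int):
        raise ValueError(f"Unknown log level: {level_name!r}")
    logging.getLogger(logger_name).setLevel(level)

if __name__ == "__main__":
    set_log_level("lmcache", "DEBUG")  # "lmcache" logger name is an assumption
    print(dump_all_thread_stacks())
```

And for the cache-policy items (#1341), a toy LRU sketch over a plain key/value store; LMCache's real policies evict KV-cache chunks, and S3-FIFO replaces the recency order below with small/main FIFO queues plus a ghost queue:

```python
# Toy LRU sketch for the cache-policy items above; a simple key/value store,
# not LMCache's actual chunk-level eviction code.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data: OrderedDict[str, bytes] = OrderedDict()

    def get(self, key: str):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key: str, value: bytes) -> None:
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

cache = LRUCache(capacity=2)
cache.put("a", b"1"); cache.put("b", b"2")
cache.get("a")        # "a" is now most recently used
cache.put("c", b"3")  # evicts "b"
assert cache.get("b") is None and cache.get("a") == b"1"
```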
Bugs:
- [random] 1P1D: the decoder will get stuck #1258
- Prefiller endlessly prints "Failed to allocate memory object, retrying..." and the request gets stuck #1337
- [Bug][xPyD][lmcache0.3.3+vllm0.10.0] "failed to allocate memory for tensor" during benchmark with LMCache xPyD version #1339
- [bug] Layerwise mode retrieve error #1150
Testing + CI/CD
See #933 for how to run the unit tests (a minimal pytest sketch follows below)
- [BUG] Python packages missing for unit tests #696
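As a rough sketch, running the suite programmatically might look like this; #933 is the authoritative guide, and the `tests/` path is an assumption:

```python
# Rough sketch of running the unit tests programmatically with pytest; see #933
# for the authoritative instructions. The "tests/" path is an assumption.
import sys
import pytest

if __name__ == "__main__":
    # Equivalent to running `pytest -v tests/` from the repository root.
    sys.exit(pytest.main(["-v", "tests/"]))
```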
Performance / Profiling / Workloads
If LMCache is not performant for your workload, a profile (see https://docs.vllm.ai/en/v0.9.1/contributing/profiling.html) along with a detailed description of your deployment and workload would be great!
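The vLLM guide above covers the torch-profiler workflow; as a lighter-weight starting point, here is a standard-library cProfile sketch (`run_workload` is a placeholder, not an LMCache API):

```python
# Lightweight profiling sketch with the standard library's cProfile. Replace
# run_workload() with the requests you actually send to your LMCache deployment;
# for GPU-side detail, use the torch profiler per the vLLM guide linked above.
import cProfile
import pstats

def run_workload():
    # Placeholder workload; substitute your real benchmark/client calls here.
    return sum(i * i for i in range(1_000_000))

profiler = cProfile.Profile()
profiler.enable()
run_workload()
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)  # top 20 entries
```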
Features / Improvements
- [Feature request] CacheBlend support for DeepSeek models #1082 (DeepSeek / MLA)
- CacheBlend for Qwen3 #1121
- Online blend (not just offline): Blend v1 for online serving and benchmark #1136
- Online blend and more models: vllm model for vllm-instance not found #1405
RFCs (Discussions on the future directions of LMCache and how to make progress)
- Add support for LMStudio and Ollama #923
- Dynamic configuration: [Controller] Dynamic update config through controller #1265 (comment)
Older Issues
More tasks are coming.