In-memory semantic caching for LLMs, written in Rust
Semantic caching layer for your LLM applications. Reuse responses and reduce token usage. - sensoris/semcache
I’m very happy to publish my first crate on crates.io :tada: