My GPU is not large enough to run 2 vLLM instances, so I followed the quick start guide and started only 1 vLLM container instance. I sent the same query multiple times using openai_chat_completion_client.py. I expected the TTFT of the first response to be larger than the later ones, since repeated requests for the same query should be able to reuse the KV cache saved by LMCache. However, the TTFT stays the same across requests.
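For reference, this is roughly how I measured TTFT: a minimal sketch using the openai Python package against the local vLLM OpenAI-compatible endpoint. The port, API key placeholder, and model name here are assumptions based on the quick start defaults, not the exact contents of openai_chat_completion_client.py.

```python
import time
from openai import OpenAI

# Local vLLM OpenAI-compatible server; port and API key are assumed defaults.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

# Long repeated prompt so there is a meaningful prefix for the KV cache to reuse.
PROMPT = "Summarize the history of the Roman Empire in detail. " * 20

def measure_ttft() -> float:
    """Send the prompt with streaming and return the time to the first token."""
    start = time.perf_counter()
    stream = client.chat.completions.create(
        model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed; use whatever model the server is serving
        messages=[{"role": "user", "content": PROMPT}],
        stream=True,
        max_tokens=32,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            return time.perf_counter() - start
    return time.perf_counter() - start

for i in range(3):
    print(f"request {i}: TTFT = {measure_ttft():.3f}s")
```

With caching working, I would expect request 0 to show a noticeably higher TTFT than requests 1 and 2, but in my case all three are about the same.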
Is my understanding of LMCache correct? Does LMCache work only with multiple vLLM instances?
Thanks.