Description:
The kubernetes-sigs/llm-instance-gateway project has introduced a new backendRef called LLMServerPool, which represents a collection of model servers inside Kubernetes that an HTTPRoute can route to. The project is looking for Envoy-proxy-based implementations that support routing to this backendRef natively. More in kubernetes-sigs/gateway-api-inference-extension#19
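For illustration, routing to such a pool from an HTTPRoute might look like the sketch below. The `group` and `kind` values are assumptions based on the linked proposal, not a confirmed API surface:

```yaml
# Hypothetical HTTPRoute referencing an LLMServerPool backendRef.
# The backendRef group/kind below are assumptions from the proposal.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: llm-route
spec:
  parentRefs:
    - name: my-gateway          # assumed Gateway name
  rules:
    - backendRefs:
        - group: inference.networking.x-k8s.io  # assumed API group
          kind: LLMServerPool
          name: my-llm-pool                     # assumed pool name
```

Supporting this natively would mean the data plane resolves the pool membership itself rather than treating it as an opaque Service backend.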
Creating this issue to decide whether Envoy Gateway should add support for this.