Issue Type:
What happened:
When linkerd is used as an ingress controller for gRPC, the daemonset gradually stops routing gRPC requests.
What you expected to happen:
Linkerd should not stop routing gRPC requests.
How to reproduce it (as minimally and precisely as possible):
Our setup:
3 node Kubernetes cluster running linkerd in a daemonset.
Anything else we need to know?:
Stats for my daemonset over the past 7 days

There is currently nothing collecting the telemetry data.
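Since the `io.l5d.prometheus` telemeter only exposes metrics for scraping, something has to pull them before the stats above mean much. A minimal scrape-config sketch, assuming linkerd's default admin port 9990 and an `app: l5d` pod label (both assumptions, not taken from this issue):

```yaml
# Hypothetical Prometheus scrape config; the job name, pod label selector,
# and admin port 9990 (linkerd's default) are assumptions.
scrape_configs:
- job_name: 'linkerd'
  metrics_path: /admin/metrics/prometheus
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # Keep only pods labeled app=l5d (assumed label).
  - source_labels: [__meta_kubernetes_pod_label_app]
    regex: l5d
    action: keep
  # Scrape the linkerd admin port rather than the pod's declared port.
  - source_labels: [__address__]
    regex: (.+?)(:\d+)?
    replacement: $1:9990
    target_label: __address__
```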
Environment:
- linkerd/namerd version, config files:
This configuration is based on having an internal namespace and a public namespace. The public namespace contains publicly accessible microservices, which talk to the internal microservices over linkerd as the mesh linking them all.
Configuration
admin:
  ip: 0.0.0.0
  port:
namers:
- kind: io.l5d.k8s
  experimental: true
  host: localhost
  port: 8001
telemetry:
- kind: io.l5d.prometheus
- kind: io.l5d.recentRequests
  sampleRate: 0.25
usage:
  orgId: l5d-linkerd
routers:
- protocol: h2
  label: out-public
  experimental: true
  dtab: |
    /srv => /#/io.l5d.k8s/production-public/grpc;
    /grpc => /srv;
    /svc => /$/io.buoyant.http.domainToPathPfx/grpc;
  identifier:
    kind: io.l5d.header.path
    segments: 1
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.k8s.daemonset
      namespace: production-l5d
      port: in-public
      service: "l5d-linkerd-internal"
  servers:
  - port: 4140
    ip: 0.0.0.0
    clearContext: true
    tls:
      certPath: /io.buoyant/linkerd/certs/tls.crt
      keyPath: /io.buoyant/linkerd/certs/tls.key
- protocol: h2
  label: in-public
  experimental: true
  dtab: |
    /srv => /#/io.l5d.k8s/production-public/grpc;
    /grpc => /srv;
    /svc => /$/io.buoyant.http.domainToPathPfx/grpc;
  identifier:
    kind: io.l5d.header.path
    segments: 1
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.k8s.localnode
  servers:
  - port: 4141
    ip: 0.0.0.0
- protocol: h2
  label: out-internal
  experimental: true
  dtab: |
    /srv => /#/io.l5d.k8s/production-internal/grpc;
    /grpc => /srv;
    /svc => /$/io.buoyant.http.domainToPathPfx/grpc;
  identifier:
    kind: io.l5d.header.path
    segments: 1
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.k8s.daemonset
      namespace: production-l5d
      port: in-internal
      service: "l5d-linkerd-internal"
  servers:
  - port: 4142
    ip: 0.0.0.0
    clearContext: true
- protocol: h2
  label: in-internal
  experimental: true
  dtab: |
    /srv => /#/io.l5d.k8s/production-internal/grpc;
    /grpc => /srv;
    /svc => /$/io.buoyant.http.domainToPathPfx/grpc;
  identifier:
    kind: io.l5d.header.path
    segments: 1
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.k8s.localnode
  servers:
  - port: 4143
    ip: 0.0.0.0
- protocol: http
  label: http-out-int
  dtab: |
    /srv => /#/io.l5d.k8s/production-internal/http;
    /host => /srv;
    /svc => /host;
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.k8s.daemonset
      namespace: production-l5d
      port: http-in-int
      service: "l5d-linkerd-internal"
  servers:
  - port: 4240
    ip: 0.0.0.0
    clearContext: true
- protocol: http
  label: http-in-int
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.k8s.localnode
  dtab: |
    /srv => /#/io.l5d.k8s/production-internal/http;
    /host => /srv;
    /svc => /host;
  servers:
  - port: 4241
    ip: 0.0.0.0
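For reference, a sketch of how a single gRPC request should resolve through the out-public router above, following the documented behavior of `io.l5d.header.path`, `io.buoyant.http.domainToPathPfx`, and the `io.l5d.k8s` namer. The service and method names (`hello.Greeter/SayHello`) are placeholders, not taken from our deployment:

```
# Request :path = /hello.Greeter/SayHello arriving on port 4140
#
# identifier (io.l5d.header.path, segments: 1) takes the first path segment:
#   -> /svc/hello.Greeter
# /svc => /$/io.buoyant.http.domainToPathPfx/grpc
#   (dot-separated name reversed into path segments):
#   -> /grpc/Greeter/hello
# /grpc => /srv:
#   -> /srv/Greeter/hello
# /srv => /#/io.l5d.k8s/production-public/grpc:
#   -> /#/io.l5d.k8s/production-public/grpc/Greeter/hello
#      (k8s namer: namespace production-public, port grpc, service Greeter)
# io.l5d.k8s.daemonset transformer:
#   endpoints are replaced by the l5d-linkerd-internal daemonset pods
#   on port in-public, which route locally via io.l5d.k8s.localnode
```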
- Platform, version, and config files (Kubernetes, DC/OS, etc):
  Kubernetes v1.9.6
  Linkerd 1.3.7
- Cloud provider or hardware configuration:
  Google Container Engine
- Related issues:
  - Linkerd continues to talk to old endpoint after Kubernetes deployment #1730 (comment)
  - gRPC health-checking for mesh interface #1582