
linkerd-proxy 2.11 crashes when the application issues requests resulting in Connection refused #7103

@kwencel

Description


Bug Report

What is the issue?

linkerd-proxy 2.11 crashes when the application it encapsulates issues requests that result in Connection refused.

It also crashes on the latest edge-21.10.1.
It does not crash on stable-2.10, so we strongly suspect it is a regression introduced after that version.

How can it be reproduced?

We have prepared a YAML manifest which spawns a small Locust-based stress test (not really a stress test, as it takes very few requests per second to crash the linkerd-proxy). It tries to access an existing Kubernetes service (linkerd-identity in this example, but it can be any service) on a port which the service does not expose. As a result, the request ends with Connection refused and linkerd-proxy dies.

It crashes on our production infrastructure (an on-prem Rancher-provisioned cluster), but for simplicity we have also prepared a minikube-based scenario.

1. Spawn a minikube cluster

minikube start --cpus 2 --kubernetes-version v1.20.11 -p linkerdcrash

2. Install Linkerd 2.11

linkerd install | kubectl apply -f -

3. Run our YAML

curl -s https://gist.githubusercontent.com/kwencel/f141159ae710f85eb9ab7c54fee76a5e/raw/d37dd20518f015185b05e60d87d17dd40fdfd7c1/locust.yaml | kubectl apply -f -
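Once the manifest is applied, the crash can be observed by watching the pod's restart count and pulling the proxy container's logs. A minimal sketch, assuming the gist creates a Locust pod in the default namespace (substitute the actual pod name for the placeholder):

```shell
# Watch for the meshed pod's restart count climbing / CrashLoopBackOff
kubectl get pods -w

# Tail the proxy's log; after a crash, --previous shows the final
# output of the container instance that died
kubectl logs -f <locust-pod> -c linkerd-proxy
kubectl logs <locust-pod> -c linkerd-proxy --previous
```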

Logs, error output, etc

time="2021-10-15T18:58:09Z" level=info msg="Found pre-existing key: /var/run/linkerd/identity/end-entity/key.p8"
time="2021-10-15T18:58:09Z" level=info msg="Found pre-existing CSR: /var/run/linkerd/identity/end-entity/csr.der"
[     0.000523s] ERROR ThreadId(01) linkerd_app::env: No inbound ports specified via LINKERD2_PROXY_INBOUND_PORTS
[     0.000745s]  INFO ThreadId(01) linkerd2_proxy::rt: Using single-threaded proxy runtime
[     0.001120s]  INFO ThreadId(01) linkerd2_proxy: Admin interface on 0.0.0.0:4191
[     0.001142s]  INFO ThreadId(01) linkerd2_proxy: Inbound interface on 0.0.0.0:4143
[     0.001144s]  INFO ThreadId(01) linkerd2_proxy: Outbound interface on 127.0.0.1:4140
[     0.001146s]  INFO ThreadId(01) linkerd2_proxy: Tap DISABLED
[     0.001147s]  INFO ThreadId(01) linkerd2_proxy: Local identity is default.default.serviceaccount.identity.linkerd.cluster.local
[     0.001151s]  INFO ThreadId(01) linkerd2_proxy: Identity verified via linkerd-identity-headless.linkerd.svc.cluster.local:8080 (linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local)
[     0.001153s]  INFO ThreadId(01) linkerd2_proxy: Destinations resolved via linkerd-dst-headless.linkerd.svc.cluster.local:8086 (linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local)
[     0.014938s]  INFO ThreadId(02) daemon:identity: linkerd_app: Certified identity: default.default.serviceaccount.identity.linkerd.cluster.local
[     0.829546s]  INFO ThreadId(01) outbound:server{orig_dst=10.98.134.224:12345}: linkerd_app_outbound::http::proxy_connection_close: Closing application connection for remote proxy error=Connection refused (os error 111)
[     0.831752s]  WARN ThreadId(01) outbound:server{orig_dst=10.98.134.224:12345}: linkerd_reconnect: Service failed error=channel closed

And shortly after, linkerd-proxy dies.
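The `Connection refused (os error 111)` in the log above is an ordinary ECONNREFUSED, the error any client gets when dialing a port nothing listens on. It can be simulated outside the cluster (port 1 here is an arbitrary closed port, not anything Linkerd-specific):

```shell
# Nothing listens on 127.0.0.1:1, so the TCP connect is refused;
# curl exits with code 7 (CURLE_COULDNT_CONNECT)
curl -s http://127.0.0.1:1/
echo "curl exit code: $?"
```

The bug is that the outbound proxy treats this routine per-request failure as fatal instead of surfacing it to the caller and carrying on.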

linkerd check output

Linkerd core checks
===================

kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ control plane pods are ready
√ cluster networks contains all node podCIDRs

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist

linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor

linkerd-webhooks-and-apisvc-tls
-------------------------------
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days
√ policy-validator webhook has valid cert
√ policy-validator cert is valid for at least 60 days

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

control-plane-version
---------------------
√ can retrieve the control plane version
√ control plane is up-to-date
√ control plane and cli versions match

linkerd-control-plane-proxy
---------------------------
√ control plane proxies are healthy
√ control plane proxies are up-to-date
√ control plane proxies and cli versions match

Status check results are √

Environment

  • Kubernetes Version: v1.20.11
  • Cluster Environment: on-prem RKE / minikube v1.23.2
  • Host OS: Debian 10 / minikube v1.23.2
  • Linkerd version: stable-2.11.0

Additional context

At first sight, it might look like a not-so-important bug, but we have encountered it in our production usage. We had a haproxy-based load balancer outside of the Kubernetes cluster which managed connections to several backends (also outside the cluster).

We observed that when a request from the k8s cluster to the load balancer was made during the restart of one of those backends, haproxy would often return an error, since it had not had enough time to notice the backend was down. To our surprise, that error caused the linkerd-proxy of the caller to crash. We have reverted to 2.10 and it is working fine now.
