linkerd/linkerd2-proxy #2237 (Closed)
Description
What is the issue?
If an HTTP request is sent to a port marked as opaque, the proxy returns a 404 response with the error "no route found for request", even if no Server resource is defined for that port.
How can it be reproduced?
- Install Linkerd and inject emojivoto
- Edit the deploy/web pod template to add the config.linkerd.io/opaque-ports: "8080" annotation
- Exec into another meshed pod: kubectl -n emojivoto exec -it emoji-696d9d8f95-sg25b -c emoji-svc -- /bin/sh
- Issue a curl request to web:
# curl -v web-svc.emojivoto.svc.cluster.local
[...]
* Trying 10.96.92.112...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55fded8edf50)
* Connected to web-svc.emojivoto.svc.cluster.local (10.96.92.112) port 80 (#0)
> GET / HTTP/1.1
> Host: web-svc.emojivoto.svc.cluster.local
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< l5d-proxy-error: no route found for request
< date: Thu, 17 Nov 2022 22:53:06 GMT
< content-length: 0
<
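For the first repro step, the annotation can also be applied without hand-editing the template. A minimal sketch (the Deployment name "web" and the merge-patch approach are assumptions; adjust to your cluster):

```shell
# Add the opaque-ports annotation to the pod template (not just the
# Deployment's own metadata), so newly injected pods pick it up.
kubectl -n emojivoto patch deploy web --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"config.linkerd.io/opaque-ports":"8080"}}}}}'

# Restart so the pods are re-created with the annotation in place.
kubectl -n emojivoto rollout restart deploy web
```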
Logs, error output, etc
Proxy logs from the web (incoming) proxy:
[ 34.704855s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.125.138:51892}:server{port=4143}:direct: linkerd_tls::server: Peeked bytes from TCP stream sz=0
[ 34.704877s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.125.138:51892}:server{port=4143}:direct: linkerd_tls::server: Attempting to buffer TLS ClientHello after incomplete peek
[ 34.704881s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.125.138:51892}:server{port=4143}:direct: linkerd_tls::server: Reading bytes from TCP stream buf.capacity=8192
[ 34.704888s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.125.138:51892}:server{port=4143}:direct: linkerd_tls::server: Read bytes from TCP stream buf.len=288
[ 34.706722s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.125.138:51892}:server{port=4143}:direct: linkerd_meshtls_rustls::server: Accepted TLS connection client.id=Some(ClientId(Name("default.emojivoto.serviceaccount.identity.linkerd.cluster.local"))) alpn=Some("transport.l5d.io/v1")
[ 34.706828s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.125.138:51892}:server{port=4143}:direct: linkerd_transport_header::server: Read transport header header=TransportHeader { port: 8080, name: None, protocol: Some(Http2) }
[ 34.706839s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.125.138:51892}:server{port=4143}:direct: linkerd_proxy_http::server: Creating HTTP service version=H2
[ 34.708448s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.125.138:51892}:server{port=4143}:direct: linkerd_proxy_http::server: Handling as HTTP version=H2
[ 34.709905s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.125.138:51892}:server{port=4143}:direct:http{v=h2}: linkerd_proxy_http::orig_proto: translating HTTP2 to orig-proto: "HTTP/1.1"
[ 34.709985s] INFO ThreadId(01) inbound:accept{client.addr=10.244.125.138:51892}:server{port=4143}:direct:http{v=h2}:http{client.addr=10.244.125.138:51892 client.id="default.emojivoto.serviceaccount.identity.linkerd.cluster.local" timestamp=2022-11-17T22:48:04.342423441Z method="GET" uri=http://10.244.125.130/ version=HTTP/2.0 trace_id="" request_bytes="" user_agent="curl/7.64.0" host="10.244.125.130"}:rescue{client.addr=10.244.125.138:51892}: linkerd_app_core::errors::respond: Request failed error=no route found for request
[ 34.710006s] DEBUG ThreadId(01) inbound:accept{client.addr=10.244.125.138:51892}:server{port=4143}:direct:http{v=h2}:http{client.addr=10.244.125.138:51892 client.id="default.emojivoto.serviceaccount.identity.linkerd.cluster.local" timestamp=2022-11-17T22:48:04.342423441Z method="GET" uri=http://10.244.125.130/ version=HTTP/2.0 trace_id="" request_bytes="" user_agent="curl/7.64.0" host="10.244.125.130"}: linkerd_app_core::errors::respond: Handling error on HTTP connection status=404 Not Found version=HTTP/2.0 close=false
Output of linkerd check -o short:
Status check results are √
Environment
kubectl version --short
Client Version: v1.24.3
Kustomize Version: v4.5.4
Server Version: v1.21.1
Possible solution
No response
Additional context
Notice that the inbound proxy is operating in "direct" mode because of the opaque ports annotation.
This issue was originally reported in #9811
Would you like to work on fixing this bug?
No response