Description
What is the issue?
I recently started evaluating Linkerd as a service mesh on our dev cluster. We currently use Kong as the ingress in our K8s cluster, so I followed the steps to integrate Linkerd with Kong.
To set this up:
I created a KongPlugin that sets the l5d-dst-override header to the service URL of our API, and injected the Linkerd proxy into the app namespace as well as into Kong (injected in ingress mode).
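For reference, the KongPlugin looks roughly like this (the plugin name and the target port are illustrative; the service name and namespace match the logs below):

```yaml
# Sketch of a KongPlugin that sets l5d-dst-override via Kong's
# request-transformer. The metadata name and the port (:80) are
# assumptions for illustration, not taken from the cluster.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: set-l5d-dst-override
  namespace: linkerd-dev
plugin: request-transformer
config:
  add:
    headers:
      - l5d-dst-override:linkerd-communication.linkerd-dev.svc.cluster.local:80
```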
Calling the API from a REST client, I get 404 Not Found. I can see the request pass from Kong's linkerd-proxy to the app's linkerd-proxy, but it then fails to find a route.
No HTTPRoute or Server resource has been created yet. Using the latest stable version: 2.12.0.
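In case it matters, a Server resource covering the app's HTTP port would look roughly like the following (the podSelector label and port name are assumptions); nothing like this exists in the cluster yet:

```yaml
# Hypothetical Server for the app port; the label and port name
# are illustrative, not taken from the actual workload.
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: linkerd-communication-http
  namespace: linkerd-dev
spec:
  podSelector:
    matchLabels:
      app: linkerd-communication
  port: http
  proxyProtocol: HTTP/1
```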
[ 12849.182967s] INFO ThreadId(01) inbound:server{port=4143}:rescue{client.addr=172.16.26.50:53956}: linkerd_app_core::errors::respond: Request failed error=no route found for request
Another observation: if I remove the opaque ports from the app's Service and pods, the request doesn't even reach the app's linkerd-proxy.
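(The opaque ports are set via the config.linkerd.io/opaque-ports annotation; I check them like this, with the Service name taken from the logs below. These commands assume access to the cluster:)

```shell
# Inspect the opaque-ports annotation on the app Service and its pods.
kubectl get svc linkerd-communication -n linkerd-dev \
  -o jsonpath='{.metadata.annotations.config\.linkerd\.io/opaque-ports}'
kubectl get pods -n linkerd-dev \
  -o jsonpath='{.items[*].metadata.annotations.config\.linkerd\.io/opaque-ports}'
```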
How can it be reproduced?
Steps to reproduce:
Basic setup of the Kong + Linkerd injection, but with our own service, which works fine without the mesh
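Roughly, the injection was done like this (deployment and namespace names are illustrative; the Kong namespace matches the client.id in the logs below):

```shell
# Standard injection for the app namespace.
kubectl get deploy -n linkerd-dev -o yaml | linkerd inject - | kubectl apply -f -
# Ingress-mode injection for Kong.
kubectl get deploy -n linkerd-kong -o yaml | linkerd inject --ingress - | kubectl apply -f -
```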
Logs, error output, etc
Some logs from the app's linkerd-proxy with debug logging enabled:
[ 1588.968384s] DEBUG ThreadId(01) inbound:accept{client.addr=172.16.26.50:41504}:server{port=4143}:direct: linkerd_transport_header::server: Read transport header header=TransportHeader { port: 4444, name: None, protocol: Some(Http2) }
[ 1588.968394s] DEBUG ThreadId(01) inbound:accept{client.addr=172.16.26.50:41504}:server{port=4143}:direct: linkerd_proxy_http::server: Creating HTTP service version=H2
[ 1588.968410s] DEBUG ThreadId(01) inbound:accept{client.addr=172.16.26.50:41504}:server{port=4143}:direct: linkerd_proxy_http::server: Handling as HTTP version=H2
[ 1588.968425s] DEBUG ThreadId(01) inbound:accept{client.addr=172.16.26.50:41504}:server{port=4143}:direct: h2::codec::framed_write: send frame=Settings { flags: (0x0), initial_window_size: 65535, max_frame_size: 16384, max_header_list_size: 16777216 }
[ 1588.968477s] DEBUG ThreadId(01) inbound:accept{client.addr=172.16.26.50:41504}:server{port=4143}:direct:Connection{peer=Server}: h2::codec::framed_read: received frame=Settings { flags: (0x0), enable_push: 0, initial_window_size: 65535, max_frame_size: 16384 }
[ 1588.968486s] DEBUG ThreadId(01) inbound:accept{client.addr=172.16.26.50:41504}:server{port=4143}:direct:Connection{peer=Server}: h2::codec::framed_write: send frame=Settings { flags: (0x1: ACK) }
[ 1588.968489s] DEBUG ThreadId(01) inbound:accept{client.addr=172.16.26.50:41504}:server{port=4143}:direct:Connection{peer=Server}: h2::codec::framed_read: received frame=WindowUpdate { stream_id: StreamId(0), size_increment: 983041 }
[ 1588.968517s] DEBUG ThreadId(01) inbound:accept{client.addr=172.16.26.50:41504}:server{port=4143}:direct:Connection{peer=Server}: h2::codec::framed_read: received frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
[ 1588.968530s] DEBUG ThreadId(01) inbound:accept{client.addr=172.16.26.50:41504}:server{port=4143}:direct:Connection{peer=Server}: h2::codec::framed_read: received frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
[ 1588.968537s] DEBUG ThreadId(01) inbound:accept{client.addr=172.16.26.50:41504}:server{port=4143}:direct:Connection{peer=Server}: h2::codec::framed_write: send frame=WindowUpdate { stream_id: StreamId(0), size_increment: 983041 }
[ 1588.968582s] DEBUG ThreadId(01) inbound:accept{client.addr=172.16.26.50:41504}:server{port=4143}:direct:http{v=h2}: linkerd_proxy_http::orig_proto: translating HTTP2 to orig-proto: "HTTP/1.1"
[ 1588.968608s] INFO ThreadId(01) inbound:accept{client.addr=172.16.26.50:41504}:server{port=4143}:direct:http{v=h2}:http{client.addr=172.16.26.50:41504 client.id="linkerd-kong-serviceaccount.linkerd-kong.serviceaccount.identity.linkerd.cluster.local" timestamp=2022-11-11T05:04:33.985906462Z method="POST" uri=http://linkerd-communication.linkerd-dev.svc.cluster.local/v0/accounts-or-emails version=HTTP/2.0 trace_id="" request_bytes="496" user_agent="insomnia/2022.6.0" host="linkerd-communication.linkerd-dev.svc.cluster.local"}:rescue{client.addr=172.16.26.50:41504}: linkerd_app_core::errors::respond: Request failed error=no route found for request
[ 1588.968617s] DEBUG ThreadId(01) inbound:accept{client.addr=172.16.26.50:41504}:server{port=4143}:direct:http{v=h2}:http{client.addr=172.16.26.50:41504 client.id="linkerd-kong-serviceaccount.linkerd-kong.serviceaccount.identity.linkerd.cluster.local" timestamp=2022-11-11T05:04:33.985906462Z method="POST" uri=http://linkerd-communication.linkerd-dev.svc.cluster.local/v0/accounts-or-emails version=HTTP/2.0 trace_id="" request_bytes="496" user_agent="insomnia/2022.6.0" host="linkerd-communication.linkerd-dev.svc.cluster.local"}: linkerd_app_core::errors::respond: Handling error on HTTP connection status=404 Not Found version=HTTP/2.0 close=false
Output of linkerd check -o short:
Status check results are √
Environment
- Kubernetes version: 1.23.2
- Linkerd version: 2.12.2
- Hosted K8s cluster using Kubeadm
- Calico CNI
- Host OS: RHEL 8.5
Possible solution
I believe destination resolution is misconfigured somewhere, so I need help diagnosing it.
Additional context
No response
Would you like to work on fixing this bug?
maybe