
IPv6 L2 announcement not responding to NDP requests, SKB_DROP_REASON_IPV6_NDISC_NS_OTHERHOST #43774

@morpig

Description

Is there an existing issue for this?

  • I have searched the existing issues

What happened?

We are testing a fully IPv6-only cluster, including external IPs for the gateway.

Our setup is a mix of on-prem and AWS nodes, routed over VXLAN for cross-node traffic.

However, when we deploy a Service with an external IP allocated by Cilium's IPAM, the IP is not reachable from external networks.

In our case, we have Node A (on-prem) with the following IPv6 setup:
- Network iface to router: ens16
- Allocated from router: redacted::a0/64
- Node IP: redacted:a0:f000::1
- IPAM pool range: redacted:a0:f000:e000::/112
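
For reference, the interface addressing and a route lookup for the allocated service IP can be checked on Node A with plain iproute2 (the address below is the redacted service IP from this report):

# ip -6 addr show dev ens16
# ip -6 route get redacted:f000:e000::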

There is no issue accessing the Service's external IP from other nodes:

# curl [redacted:f000:e000::] -v
*   Trying [redacted:f000:e000::]:80...
* Connected to redacted:f000:e000:: (redacted:f000:e000::) port 80
> GET / HTTP/1.1
> Host: [redacted:f000:e000::]
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< date: Wed, 14 Jan 2026 17:10:23 GMT
< content-length: 0

When accessing it from an external network, however, the connection fails:

# curl [redacted:f000:e000::]
curl: (7) Failed to connect to redacted:f000:e000:: port 80 after 3070 ms: No route to host
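
The solicitation can also be reproduced in isolation from a host on the same L2 segment as ens16 (e.g. the router) using ndisc6 from the ndisc6 package, which makes it easy to see whether any neighbor advertisement ever comes back:

# ndisc6 redacted:f000:e000:: eth0    # run on a neighbor host; eth0 is that host's segment-facing interface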

We can confirm that the Cilium agent on Node A holds the L2 announcement lease for the correct interface:

# kubectl exec pods/cilium-4rlrt -n kube-system -- cilium-dbg shell -- db/show l2-announce
IP                             NetworkInterface
redacted:f000:e000::   ens16
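
The Kubernetes Lease object backing the announcement (Cilium names these cilium-l2announce-<namespace>-<service>) can be cross-checked as well:

# kubectl -n kube-system get leases | grep cilium-l2announce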

The device policy selector confirms ens16 is selected:

# kubectl exec pods/cilium-4rlrt -n kube-system -- cilium-dbg shell -- db/show devices
Name              Index   Selected   Type     MTU     HWAddr              Flags                               Addresses                                                           OperStatus
lo                1       false      device   65536                       up|loopback|running                 127.0.0.1, ::1                                                      unknown
ens16             2       true       device   1500    52:50:a3:9a:5f:ef   up|broadcast|multicast|running      public-ipv4, redacted:f000::1, fe80::5050:a3ff:fe9a:5fef   up
tailscale0        3       true       tuntap   1280                        up|pointtopoint|multicast|running   100.97.5.124, fd7a:115c:a1e0::1037:57c, fe80::e8d1:16ad:bf96:3761   unknown
cilium_net        4       false      veth     1500    e6:d9:29:59:b5:f1   up|broadcast|multicast|running      fe80::e4d9:29ff:fe59:b5f1                                           up
cilium_host       5       false      veth     1500    e2:a0:b0:91:05:7d   up|broadcast|multicast|running      fd00::1fb                                                           up
cilium_vxlan      6       false      vxlan    1500    46:2d:26:83:e2:59   up|broadcast|multicast|running      fe80::442d:26ff:fe83:e259                                           unknown
lxcb74eabdd685c   12      false      veth     1500    86:15:6f:6c:bc:28   up|broadcast|multicast|running      fe80::8415:6fff:fe6c:bc28                                           up
lxc0945d7c4e55a   14      false      veth     1500    ae:a4:d5:7d:79:3c   up|broadcast|multicast|running      fe80::aca4:d5ff:fe7d:793c                                           up
lxc86ace3de4332   16      false      veth     1500    02:07:d7:af:0b:88   up|broadcast|multicast|running      fe80::7:d7ff:feaf:b88                                               up
lxcce738733f2ea   18      false      veth     1500    de:02:f6:dd:72:a3   up|broadcast|multicast|running      fe80::dc02:f6ff:fedd:72a3                                           up
lxc13143c5d2f23   20      false      veth     1500    6a:a8:5b:0d:3d:79   up|broadcast|multicast|running      fe80::68a8:5bff:fe0d:3d79                                           up
lxc74003e980dce   28      false      veth     1500    6a:71:a9:8b:20:e4   up|broadcast|multicast|running      fe80::6871:a9ff:fe8b:20e4                                           up
lxc9ad71d2cb02e   32      false      veth     1500    ae:d0:f4:31:bb:1c   up|broadcast|multicast|running      fe80::acd0:f4ff:fe31:bb1c                                           up
lxc18de018533db   34      false      veth     1500    fe:15:2e:a6:8e:19   up|broadcast|multicast|running      fe80::fc15:2eff:fea6:8e19                                           up
lxc_health        56      false      veth     1500    b2:37:7c:c2:8a:d7   up|broadcast|multicast|running      fe80::b037:7cff:fec2:8ad7                                           up
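
For completeness, the CiliumL2AnnouncementPolicy in effect (including its interfaces selector) can be dumped with:

# kubectl get ciliuml2announcementpolicies.cilium.io -o yaml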

pwru shows the NDP neighbor solicitations arriving on ens16 and then being dropped in ndisc_recv_ns:

0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  86    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) eth_type_trans
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  72    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) skb_defer_rx_timestamp
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  86    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) skb_ensure_writable
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  86    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) skb_ensure_writable
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  72    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) ip6_rcv_core
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  72    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) nf_hook_slow
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  72    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) nf_ip6_checksum
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  72    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) __skb_checksum_complete
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  72    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) ip6_route_input
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  72    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) ip6_mc_input
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  72    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) ip6_input
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  72    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) nf_hook_slow
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  72    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) ip6_input_finish
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  72    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) ip6_protocol_deliver_rcu
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  32    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) raw6_local_deliver
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  32    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) ipv6_raw_deliver
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  32    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) icmpv6_rcv
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  32    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) __pskb_pull_tail
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  24    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) ndisc_rcv
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  24    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) __pskb_pull_tail
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  32    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) ndisc_recv_ns
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  32    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) kfree_skb_reason(SKB_DROP_REASON_IPV6_NDISC_NS_OTHERHOST)
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  32    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) skb_release_head_state
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  32    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) skb_release_data
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  32    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) skb_free_head
0xffff8d0ec469dd00 3   <empty>:0        4026531840 0            ens16:2      0x86dd 1500  32    [redacted:2a03:2880:0:f329]:0->[ff02::1:ff00:0]:0(icmp6) kfree_skbmem
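
As far as we can tell from the kernel source, ndisc_recv_ns emits SKB_DROP_REASON_IPV6_NDISC_NS_OTHERHOST when the solicited target address is neither configured locally on the node nor covered by an NDP proxy entry, i.e. the external IP never becomes answerable in the kernel's neighbor discovery path. As a temporary experiment (not a fix), a manual proxy entry should show whether the rest of the path works:

# sysctl net.ipv6.conf.ens16.proxy_ndp                   # 0 by default
# ip -6 neigh show proxy dev ens16                       # list existing proxy entries, if any
# sysctl -w net.ipv6.conf.ens16.proxy_ndp=1              # experiment only
# ip -6 neigh add proxy redacted:f000:e000:: dev ens16   # experiment only: proxy-answer NS for the service IP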

The same solicitation as seen by tcpdump:

17:18:47.639811 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 32) redacted:2a03:2880:0:f329 > ff02::1:ff00:0: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has redacted:f000:e000::
          source link-address option (1), length 8 (1): c2:04:0b:f1:ed:c8
            0x0000:  c204 0bf1 edc8
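
The NS/NA exchange can be narrowed down with a byte-offset filter (ip6[40] is the ICMPv6 type when no extension headers are present; 135 = neighbor solicitation, 136 = neighbor advertisement), which shows whether the node ever sends an advertisement in response:

# tcpdump -ni ens16 'icmp6 and (ip6[40] == 135 or ip6[40] == 136)'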

We are stuck at this point; any guidance on what to debug next would be appreciated.

How can we reproduce the issue?

We are running Cilium together with Envoy Gateway. Relevant Helm values:

kubeProxyReplacement: true
routingMode: tunnel
enableIPv6Masquerade: true

underlayProtocol: "ipv6"
tunnelProtocol: "vxlan"

devices: eth+,ens+,tailscale0
MTU: 1500

ipv4:
  enabled: false

ipv6:
  enabled: true

gatewayAPI:
  enabled: true

cni:
  exclusive: false

envoy:
  enabled: false

socketLB:
  enabled: true
  hostNamespaceOnly: true
  lbMapMax: 131072

operator:
  replicas: 2
  rollOutPods: true

l2announcements:
  enabled: true

l2NeighDiscovery:
  enabled: true
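
L2 announcements additionally require a CiliumL2AnnouncementPolicy; a minimal policy of the shape we use looks like this (name and interface regex are illustrative, apiVersion as documented for L2 announcements in recent releases):

# kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: l2-policy              # illustrative name
spec:
  externalIPs: true            # announce Service external IPs
  loadBalancerIPs: true
  interfaces:
    - ^ens[0-9]+               # matches ens16
EOF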

Cilium Version

v1.19.0-pre.4

Kernel Version

Ubuntu 24, kernel 6.14.0-1018-aws

Kubernetes Version

v1.34.3+k3s1
containerd://2.1.5-k3s1

Regression

No response

Sysdump

No response

Relevant log output

No response
Anything else?

No response

Cilium Users Document

  • Are you a user of Cilium? Please add yourself to the Users doc

Code of Conduct

  • I agree to follow this project's Code of Conduct
