Upgrade from 1.18.4 to 1.19 breaks NodePort IPv6 functionality #44436
Closed
Labels
- affects/v1.19: This issue affects v1.19 branch
- area/datapath: Impacts bpf/ or low-level forwarding details, including map management and monitor messages.
- area/kpr: Anything related to our kube-proxy replacement.
- area/loadbalancing: Impacts load-balancing and Kubernetes service implementations
- feature/ipv6: Relates to IPv6 protocol support
- kind/bug: This is a bug in the Cilium logic.
- kind/community-report: This was reported by a user in the Cilium community, eg via Slack.
- kind/regression: This functionality worked fine before, but was broken in a newer release of Cilium.
- needs/triage: This issue requires triaging to establish severity and next steps.
Description
Is there an existing issue for this?
- I have searched the existing issues
Version
equal or higher than v1.19.0 and lower than v1.20.0
What happened?
After upgrading to 1.19, NodePort services are no longer accessible over IPv6. Tcpdump on the node shows TCP RST packets being sent back to the client.
How can we reproduce the issue?
- install cilium
helm install cilium cilium/cilium \
--version 1.19.1 \
--namespace kube-system \
--set ipam.mode=kubernetes \
--set routingMode=native \
--set enableIPv4Masquerade=true \
--set enableIPv6Masquerade=true \
--set kubeProxyReplacement=true \
--set autoDirectNodeRoutes=true \
--set bpf.masquerade=true \
--set bpf.hostLegacyRouting=false \
--set ipv4NativeRoutingCIDR=172.25.0.0/16 \
--set ipv6NativeRoutingCIDR=fdc8:56d5:bf9c::/48 \
--set requireIPv4PodCIDR=false \
--set requireIPv6PodCIDR=false \
--set l7Proxy=true \
--set ipv6.enabled=true \
--set envoy.enabled=true \
--set hubble.enabled=true \
--set hubble.relay.enabled=true \
--set operator.identityHeartbeatTimeout=5m \
--set prometheus.enabled=true \
--set localRedirectPolicies.enabled=true
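Before reproducing, it may help to confirm that the kube-proxy replacement and IPv6 options actually took effect in the running agent. A sketch, assuming the standard `cilium` DaemonSet and `cilium-agent` container names from the Helm chart:

```shell
# Query the running agent's status from any cilium pod and filter for
# the kube-proxy replacement and IPv6 datapath lines
kubectl -n kube-system exec ds/cilium -c cilium-agent -- \
  cilium status --verbose | grep -iE 'kubeproxyreplacement|ipv6'
```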
- create a pod and a nodeport service
kubectl create service nodeport http-echo --tcp=5678:5678
kubectl patch service http-echo --type=merge -p '{ "spec": { "ipFamilyPolicy": "PreferDualStack", "ipFamilies": [ "IPv4", "IPv6" ] } }'
kubectl run -l 'app=http-echo' --restart=Never --image hashicorp/http-echo:alpine http-echo -- -text="here I am"
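Once created, the Service should carry both IP families and an allocated NodePort; a quick check using standard kubectl jsonpath (output will vary per cluster):

```shell
# Print the assigned IP families and the allocated NodePort of the Service
kubectl get svc http-echo \
  -o jsonpath='{.spec.ipFamilies}{" nodePort="}{.spec.ports[0].nodePort}{"\n"}'
```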
- curl the NodePort of the service over IPv6.
curl -siv -m 2 -6 http://[ipv6]:31546
* Failed to connect to <ipv6> port 31546 after 35 ms: Error
while it works as expected over IPv4.
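The reset behavior can be confirmed on the node itself with a capture along these lines (31546 is the NodePort from the curl above; adjust to the allocated port). On an affected node this should show the incoming SYN answered with a RST instead of a SYN-ACK:

```shell
# Capture the IPv6 TCP handshake on the NodePort on any interface
tcpdump -ni any 'ip6 and tcp port 31546'
```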
Cilium Version
1.19.1
Kernel Version
Linux node-c97577568-fhmb6 6.8.0-100-generic #100-Ubuntu SMP PREEMPT_DYNAMIC Tue Jan 13 16:40:06 UTC 2026 x86_64 x86_64 x86_64 GNU/Linux
Kubernetes Version
1.34.3
Regression
1.18.4
Sysdump
cilium-sysdump-20260219-124651.zip
Relevant log output
Anything else?
No response
Cilium Users Document
- Are you a user of Cilium? Please add yourself to the Users doc
Code of Conduct
- I agree to follow this project's Code of Conduct