
CI: K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with direct routing Tests with secondary NodePort device #12127

@pchaigno

Description


Over the past 14 days, this failure happened once in master and at least once in PRs:
https://jenkins.cilium.io/job/cilium-ginkgo/job/cilium/job/master/5046/testReport/Suite-k8s-1/11/K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_direct_routing_Tests_with_secondary_NodePort_device/
https://jenkins.cilium.io/job/Cilium-PR-Ginkgo-Tests-Kernel/2145/testReport/Suite-k8s-1/17/K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_direct_routing_Tests_with_secondary_NodePort_device/

Could be a duplicate of #8945?

The command fails with curl exit code 28 (operation timed out), and stdout contains only five of the ten expected TFTP responses, so the sixth request to tftp://[::ffff:127.0.0.1]:30151/hello appears to have exceeded the 20s `--max-time`.

Stacktrace

/home/jenkins/workspace/cilium-ginkgo_cilium_master/k8s-1.11-gopath/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:512
Request from k8s1 to service tftp://[::ffff:127.0.0.1]:30151/hello failed
Expected command: kubectl exec -n kube-system log-gatherer-6c79s -- /bin/sh -c 'set -e; for i in $(seq 1 10); do curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[::ffff:127.0.0.1]:30151/hello; done' 
To succeed, but it failed:
Exitcode: 28 
Stdout:
 	 
	 Hostname: testds-hz59j
	 
	 Request Information:
	 	client_address=192.168.36.11
	 	client_port=55510
	 	real path=/hello
	 	request_scheme=tftp
	 
	 
	 Hostname: testds-hz59j
	 
	 Request Information:
	 	client_address=192.168.36.11
	 	client_port=52738
	 	real path=/hello
	 	request_scheme=tftp
	 
	 
	 Hostname: testds-hz59j
	 
	 Request Information:
	 	client_address=192.168.36.11
	 	client_port=33402
	 	real path=/hello
	 	request_scheme=tftp
	 
	 
	 Hostname: testds-hz59j
	 
	 Request Information:
	 	client_address=192.168.36.11
	 	client_port=47691
	 	real path=/hello
	 	request_scheme=tftp
	 
	 
	 Hostname: testds-hz59j
	 
	 Request Information:
	 	client_address=192.168.36.11
	 	client_port=50861
	 	real path=/hello
	 	request_scheme=tftp
	 
	 
Stderr:
 	 command terminated with exit code 28
	 

/home/jenkins/workspace/cilium-ginkgo_cilium_master/k8s-1.11-gopath/src/github.com/cilium/cilium/test/k8sT/Services.go:676
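For context: curl exit code 28 is CURLE_OPERATION_TIMEDOUT, and the test's loop runs under `set -e`, so the first request that times out aborts the remaining iterations and the whole `kubectl exec` returns that curl exit code. A minimal sketch of that behavior, using a hypothetical `attempt` function as a stand-in for the curl call (it "times out" on iteration 6, mirroring the five successful responses seen in stdout above):

```shell
# Stand-in for the curl call in the test; iterations 1-5 "succeed",
# iteration 6 "times out" with curl's exit code 28.
attempt() {
  if [ "$1" -ge 6 ]; then
    return 28          # simulate curl exceeding --max-time
  fi
  echo "response $1"   # simulate a successful TFTP response on stdout
}

# Same shape as the test command: `set -e` makes the loop abort on the
# first failing request, and the subshell exits with that status.
out=$(set -e; for i in $(seq 1 10); do attempt "$i"; done)
rc=$?

echo "$out"            # only the five responses before the "timeout"
echo "exit=$rc"        # exit=28, matching the reported failure
```

This is why the failing run shows five complete responses followed by exit code 28: the loop never reaches iterations 7-10.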

Standard Output

Number of "context deadline exceeded" in logs: 0
⚠️  Number of "level=error" in logs: 8
⚠️  Number of "level=warning" in logs: 54
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 5 errors/warnings:
Deleting no longer present service
Mutation detector is enabled, this will result in memory leakage.
Unable to enqueue endpoint policy visibility event
Hubble server will be exposing its API insecurely on this address
BPF system config check: NOT OK.
Cilium pods: [cilium-8s7pc cilium-zfp98]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
prometheus-556f8994ff-vjjsm             
test-k8s2-848b6f7864-wclfd              
testclient-jdxfl                        
testclient-pvq76                        
testds-ffr2t                            
testds-hz59j                            
coredns-687db6485c-fvvfj                
grafana-5968d99cc4-zr9qn                
Cilium agent 'cilium-8s7pc': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0
Cilium agent 'cilium-zfp98': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0

Standard error

21:49:30 STEP: Installing Cilium
21:49:31 STEP: Waiting for Cilium to become ready
21:49:31 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:49:36 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:49:41 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:49:46 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
21:49:51 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:49:56 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
21:50:01 STEP: Number of ready Cilium pods: 2
21:50:01 STEP: Validating if Kubernetes DNS is deployed
21:50:01 STEP: Checking if deployment is ready
21:50:01 STEP: Checking if kube-dns service is plumbed correctly
21:50:01 STEP: Checking if DNS can resolve
21:50:01 STEP: Checking if pods have identity
21:50:02 STEP: Kubernetes DNS is up and operational
21:50:02 STEP: Validating Cilium Installation
21:50:02 STEP: Performing Cilium controllers preflight check
21:50:02 STEP: Performing Cilium status preflight check
21:50:02 STEP: Performing Cilium health check
21:50:04 STEP: Performing Cilium service preflight check
21:50:04 STEP: Performing K8s service preflight check
21:50:05 STEP: Waiting for cilium-operator to be ready
21:50:05 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
21:50:05 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.36.11:31686"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.36.12:31686"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.36.12]:31686"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.36.11:30151/hello"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.37.11:30151/hello"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.36.11]:30151/hello"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.36.11]:31686"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.36.12]:30151/hello"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.36.12:30151/hello"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.37.11:31686"
21:50:06 STEP: Making 10 HTTP requests from outside cluster to "tftp://192.168.37.12:30151/hello"
21:50:06 STEP: Making 10 HTTP requests from outside cluster to "http://192.168.36.12:31686"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.37.12:31686"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.37.12:30151/hello"
21:50:06 STEP: Making 10 HTTP requests from outside cluster to "http://192.168.36.11:31686"
21:50:06 STEP: Making 10 HTTP requests from outside cluster to "tftp://192.168.36.11:30151/hello"
21:50:06 STEP: Making 10 HTTP requests from outside cluster to "http://192.168.37.11:31686"
21:50:06 STEP: Making 10 HTTP requests from outside cluster to "tftp://192.168.36.12:30151/hello"
21:50:06 STEP: Making 10 HTTP requests from outside cluster to "tftp://192.168.37.11:30151/hello"
21:50:06 STEP: Making 10 HTTP requests from outside cluster to "http://192.168.37.12:31686"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://127.0.0.1:31686"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://10.105.174.138:10080"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://10.105.174.138:10069/hello"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:127.0.0.1]:31686"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://127.0.0.1:30151/hello"
21:50:06 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:127.0.0.1]:30151/hello"
21:50:07 STEP: Making 10 curl requests from testclient-jdxfl pod to service tftp://10.105.174.138:10069/hello
21:50:07 STEP: Making 10 curl requests from testclient-jdxfl pod to service http://10.105.174.138:10080
21:50:07 STEP: Making 10 curl requests from testclient-jdxfl pod to service http://192.168.36.11:31686
21:50:07 STEP: Making 10 curl requests from testclient-jdxfl pod to service tftp://192.168.36.11:30151/hello
21:50:07 STEP: Making 10 curl requests from testclient-jdxfl pod to service http://[::ffff:192.168.36.11]:31686
21:50:07 STEP: Making 10 curl requests from testclient-jdxfl pod to service tftp://[::ffff:192.168.36.11]:30151/hello
21:50:07 STEP: Making 10 curl requests from testclient-jdxfl pod to service http://192.168.36.12:31686
21:50:07 STEP: Making 10 curl requests from testclient-jdxfl pod to service tftp://192.168.36.12:30151/hello
21:50:07 STEP: Making 10 curl requests from testclient-jdxfl pod to service http://[::ffff:192.168.36.12]:31686
21:50:07 STEP: Making 10 curl requests from testclient-jdxfl pod to service tftp://[::ffff:192.168.36.12]:30151/hello
21:50:09 STEP: Making 10 curl requests from testclient-pvq76 pod to service http://10.105.174.138:10080
21:50:10 STEP: Making 10 curl requests from testclient-pvq76 pod to service http://192.168.36.11:31686
21:50:10 STEP: Making 10 curl requests from testclient-pvq76 pod to service tftp://192.168.36.11:30151/hello
21:50:10 STEP: Making 10 curl requests from testclient-pvq76 pod to service tftp://10.105.174.138:10069/hello
21:50:10 STEP: Making 10 curl requests from testclient-pvq76 pod to service http://[::ffff:192.168.36.12]:31686
21:50:11 STEP: Making 10 curl requests from testclient-pvq76 pod to service tftp://[::ffff:192.168.36.11]:30151/hello
21:50:11 STEP: Making 10 curl requests from testclient-pvq76 pod to service tftp://[::ffff:192.168.36.12]:30151/hello
21:50:11 STEP: Making 10 curl requests from testclient-pvq76 pod to service http://192.168.36.12:31686
21:50:11 STEP: Making 10 curl requests from testclient-pvq76 pod to service http://[::ffff:192.168.36.11]:31686
21:50:12 STEP: Making 10 curl requests from testclient-pvq76 pod to service tftp://192.168.36.12:30151/hello
FAIL: Request from k8s1 to service tftp://[::ffff:127.0.0.1]:30151/hello failed
Expected command: kubectl exec -n kube-system log-gatherer-6c79s -- /bin/sh -c 'set -e; for i in $(seq 1 10); do curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[::ffff:127.0.0.1]:30151/hello; done' 
To succeed, but it failed:
Exitcode: 28 
Stdout:
 	 
	 Hostname: testds-hz59j
	 
	 Request Information:
	 	client_address=192.168.36.11
	 	client_port=55510
	 	real path=/hello
	 	request_scheme=tftp
	 
	 
	 Hostname: testds-hz59j
	 
	 Request Information:
	 	client_address=192.168.36.11
	 	client_port=52738
	 	real path=/hello
	 	request_scheme=tftp
	 
	 
	 Hostname: testds-hz59j
	 
	 Request Information:
	 	client_address=192.168.36.11
	 	client_port=33402
	 	real path=/hello
	 	request_scheme=tftp
	 
	 
	 Hostname: testds-hz59j
	 
	 Request Information:
	 	client_address=192.168.36.11
	 	client_port=47691
	 	real path=/hello
	 	request_scheme=tftp
	 
	 
	 Hostname: testds-hz59j
	 
	 Request Information:
	 	client_address=192.168.36.11
	 	client_port=50861
	 	real path=/hello
	 	request_scheme=tftp
	 
	 
Stderr:
 	 command terminated with exit code 28
	 

=== Test Finished at 2020-06-09T21:50:32Z====
21:50:32 STEP: Running JustAfterEach block for EntireTestsuite K8sServicesTest
===================== TEST FAILED =====================
21:50:38 STEP: Running AfterFailed block for EntireTestsuite K8sServicesTest
21:50:38 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:50:39 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:50:39 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:50:40 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE
	 cilium-monitoring   grafana-5968d99cc4-zr9qn           1/1     Running   0          1h    10.0.1.218      k8s1   <none>
	 cilium-monitoring   prometheus-556f8994ff-vjjsm        1/1     Running   0          1h    10.0.1.191      k8s1   <none>
	 default             test-k8s2-848b6f7864-wclfd         2/2     Running   0          9m    10.0.0.26       k8s2   <none>
	 default             testclient-jdxfl                   1/1     Running   0          9m    10.0.1.206      k8s1   <none>
	 default             testclient-pvq76                   1/1     Running   0          9m    10.0.0.231      k8s2   <none>
	 default             testds-ffr2t                       2/2     Running   0          2m    10.0.1.81       k8s1   <none>
	 default             testds-hz59j                       2/2     Running   0          2m    10.0.0.96       k8s2   <none>
	 kube-system         cilium-8s7pc                       1/1     Running   0          1m    192.168.36.12   k8s2   <none>
	 kube-system         cilium-operator-5c59bd8f8c-46t6p   1/1     Running   0          1m    192.168.36.12   k8s2   <none>
	 kube-system         cilium-zfp98                       1/1     Running   0          1m    192.168.36.11   k8s1   <none>
	 kube-system         coredns-687db6485c-fvvfj           1/1     Running   0          1h    10.0.0.154      k8s2   <none>
	 kube-system         etcd-k8s1                          1/1     Running   0          1h    192.168.36.11   k8s1   <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running   0          1h    192.168.36.11   k8s1   <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running   0          1h    192.168.36.11   k8s1   <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running   0          1h    192.168.36.11   k8s1   <none>
	 kube-system         log-gatherer-6c79s                 1/1     Running   0          1h    192.168.36.11   k8s1   <none>
	 kube-system         log-gatherer-d28vk                 1/1     Running   0          1h    192.168.36.12   k8s2   <none>
	 kube-system         log-gatherer-qggqx                 1/1     Running   0          1h    192.168.36.13   k8s3   <none>
	 kube-system         registry-adder-4lfz9               1/1     Running   0          1h    192.168.36.13   k8s3   <none>
	 kube-system         registry-adder-dbsdh               1/1     Running   0          1h    192.168.36.12   k8s2   <none>
	 kube-system         registry-adder-lv2rf               1/1     Running   0          1h    192.168.36.11   k8s1   <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-8s7pc cilium-zfp98]
21:50:41 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:50:41 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:50:41 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:50:41 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:50:42 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:50:42 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:50:43 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:50:43 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:50:43 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:50:43 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:50:44 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:50:44 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:50:45 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:50:45 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:50:45 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:50:45 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:51:02 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'")
21:51:02 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'") => <nil>
21:51:02 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'")
21:51:02 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'") => <nil>
21:51:03 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'")
21:51:03 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'") => <nil>
cmd: kubectl exec -n kube-system cilium-8s7pc -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend               Service Type   Backend                   
	 1    10.96.0.1:443          ClusterIP      1 => 192.168.36.11:6443   
	 2    10.96.0.10:53          ClusterIP      1 => 10.0.0.154:53        
	 3    10.103.46.88:3000      ClusterIP      1 => 10.0.1.218:3000      
	 4    10.104.46.46:9090      ClusterIP      1 => 10.0.1.191:9090      
	 5    192.168.36.11:20080    ExternalIPs    1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 6    192.168.36.11:20069    ExternalIPs    1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 7    192.168.37.12:30909    NodePort       1 => 10.0.0.96:80         
	 8    192.168.37.12:30811    NodePort       1 => 10.0.0.96:69         
	 9    10.99.47.23:80         ClusterIP      1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 10   10.99.47.23:69         ClusterIP      1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 11   10.105.174.138:10080   ClusterIP      1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 12   10.105.174.138:10069   ClusterIP      1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 13   192.168.36.12:31686    NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 15   0.0.0.0:31686          NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 16   0.0.0.0:30151          NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 17   192.168.36.12:30151    NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 19   10.104.207.11:10069    ClusterIP      1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 20   10.104.207.11:10080    ClusterIP      1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 22   192.168.36.12:31904    NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 23   0.0.0.0:31904          NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 24   0.0.0.0:30198          NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 25   192.168.36.12:30198    NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 27   10.100.168.224:10080   ClusterIP      1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 28   10.100.168.224:10069   ClusterIP      1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 30   192.168.36.12:30909    NodePort       1 => 10.0.0.96:80         
	 31   0.0.0.0:30909          NodePort       1 => 10.0.0.96:80         
	 32   192.168.36.12:30811    NodePort       1 => 10.0.0.96:69         
	 34   0.0.0.0:30811          NodePort       1 => 10.0.0.96:69         
	 35   10.99.147.49:10080     ClusterIP      1 => 10.0.0.26:80         
	 36   10.99.147.49:10069     ClusterIP      1 => 10.0.0.26:69         
	 37   192.168.36.12:31463    NodePort       1 => 10.0.0.26:80         
	 39   0.0.0.0:31463          NodePort       1 => 10.0.0.26:80         
	 40   192.168.36.12:31887    NodePort       1 => 10.0.0.26:69         
	 42   0.0.0.0:31887          NodePort       1 => 10.0.0.26:69         
	 43   10.98.104.36:10069     ClusterIP      1 => 10.0.0.26:69         
	 44   10.98.104.36:10080     ClusterIP      1 => 10.0.0.26:80         
	 45   192.168.36.12:31349    NodePort       1 => 10.0.0.26:69         
	 47   0.0.0.0:31349          NodePort       1 => 10.0.0.26:69         
	 48   192.168.36.12:31970    NodePort       1 => 10.0.0.26:80         
	 50   0.0.0.0:31970          NodePort       1 => 10.0.0.26:80         
	 51   10.108.221.123:80      ClusterIP      1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 53   0.0.0.0:30903          NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 54   192.168.36.12:30903    NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 55   10.101.43.190:80       ClusterIP      1 => 10.0.0.26:80         
	 56   192.168.36.12:31843    NodePort       1 => 10.0.0.26:80         
	 58   0.0.0.0:31843          NodePort       1 => 10.0.0.26:80         
	 59   10.98.120.47:20080     ClusterIP      1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 60   10.98.120.47:20069     ClusterIP      1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 61   192.0.2.233:20080      ExternalIPs    1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 62   192.0.2.233:20069      ExternalIPs    1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 64   0.0.0.0:32755          NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 65   192.168.36.12:32755    NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 66   192.168.36.12:31124    NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 68   0.0.0.0:31124          NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 69   192.168.36.12:8080     HostPort       1 => 10.0.0.26:80         
	 70   192.168.36.12:6969     HostPort       1 => 10.0.0.26:69         
	 71   192.168.37.12:30903    NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 72   192.168.37.12:30198    NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 73   192.168.37.12:31904    NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 74   192.168.37.12:31887    NodePort       1 => 10.0.0.26:69         
	 75   192.168.37.12:31463    NodePort       1 => 10.0.0.26:80         
	 76   192.168.37.12:31970    NodePort       1 => 10.0.0.26:80         
	 77   192.168.37.12:31349    NodePort       1 => 10.0.0.26:69         
	 78   192.168.37.12:31843    NodePort       1 => 10.0.0.26:80         
	 79   192.168.37.12:31686    NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 80   192.168.37.12:30151    NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 81   192.168.37.12:32755    NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 82   192.168.37.12:31124    NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-8s7pc -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                       IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                            
	 218        Disabled           Disabled          4          reserved:health                                   fd00::4b   10.0.0.137   ready   
	 1232       Disabled           Disabled          16555      k8s:io.cilium.k8s.policy.cluster=default          fd00::c8   10.0.0.154   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                   
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                       
	                                                            k8s:k8s-app=kube-dns                                                              
	 2183       Disabled           Disabled          23024      k8s:io.cilium.k8s.policy.cluster=default          fd00::86   10.0.0.26    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                   
	                                                            k8s:io.kubernetes.pod.namespace=default                                           
	                                                            k8s:zgroup=test-k8s2                                                              
	 2474       Disabled           Disabled          5683       k8s:io.cilium.k8s.policy.cluster=default          fd00::cb   10.0.0.96    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                   
	                                                            k8s:io.kubernetes.pod.namespace=default                                           
	                                                            k8s:zgroup=testDS                                                                 
	 3093       Disabled           Disabled          1          reserved:host                                                             ready   
	 3254       Disabled           Disabled          10761      k8s:io.cilium.k8s.policy.cluster=default          fd00::33   10.0.0.231   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                   
	                                                            k8s:io.kubernetes.pod.namespace=default                                           
	                                                            k8s:zgroup=testDSClient                                                           
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-zfp98 -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend               Service Type   Backend                   
	 1    10.104.46.46:9090      ClusterIP      1 => 10.0.1.191:9090      
	 2    10.96.0.1:443          ClusterIP      1 => 192.168.36.11:6443   
	 3    10.96.0.10:53          ClusterIP      1 => 10.0.0.154:53        
	 4    10.103.46.88:3000      ClusterIP      1 => 10.0.1.218:3000      
	 5    192.168.36.11:20080    ExternalIPs    1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 6    192.168.36.11:20069    ExternalIPs    1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 7    192.168.37.11:31843    NodePort                                 
	 8    192.168.37.11:31463    NodePort                                 
	 9    10.99.47.23:69         ClusterIP      1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 10   10.99.47.23:80         ClusterIP      1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 11   10.105.174.138:10080   ClusterIP      1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 12   10.105.174.138:10069   ClusterIP      1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 13   192.168.36.11:31686    NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 15   0.0.0.0:31686          NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 16   192.168.36.11:30151    NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 18   0.0.0.0:30151          NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 19   10.104.207.11:10069    ClusterIP      1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 20   10.104.207.11:10080    ClusterIP      1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 21   0.0.0.0:30198          NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 22   192.168.36.11:30198    NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 25   0.0.0.0:31904          NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 26   192.168.36.11:31904    NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 27   10.100.168.224:10080   ClusterIP      1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 28   10.100.168.224:10069   ClusterIP      1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 29   192.168.36.11:30909    NodePort       1 => 10.0.1.81:80         
	 31   0.0.0.0:30909          NodePort       1 => 10.0.1.81:80         
	 32   192.168.36.11:30811    NodePort       1 => 10.0.1.81:69         
	 34   0.0.0.0:30811          NodePort       1 => 10.0.1.81:69         
	 35   10.99.147.49:10080     ClusterIP      1 => 10.0.0.26:80         
	 36   10.99.147.49:10069     ClusterIP      1 => 10.0.0.26:69         
	 38   192.168.36.11:31463    NodePort                                 
	 39   0.0.0.0:31463          NodePort                                 
	 41   192.168.36.11:31887    NodePort                                 
	 42   0.0.0.0:31887          NodePort                                 
	 43   10.98.104.36:10080     ClusterIP      1 => 10.0.0.26:80         
	 44   10.98.104.36:10069     ClusterIP      1 => 10.0.0.26:69         
	 45   192.168.36.11:31970    NodePort       1 => 10.0.0.26:80         
	 47   0.0.0.0:31970          NodePort       1 => 10.0.0.26:80         
	 49   0.0.0.0:31349          NodePort       1 => 10.0.0.26:69         
	 50   192.168.36.11:31349    NodePort       1 => 10.0.0.26:69         
	 51   10.108.221.123:80      ClusterIP      1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 52   192.168.36.11:30903    NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 54   0.0.0.0:30903          NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 55   10.101.43.190:80       ClusterIP      1 => 10.0.0.26:80         
	 56   192.168.36.11:31843    NodePort                                 
	 58   0.0.0.0:31843          NodePort                                 
	 59   10.98.120.47:20080     ClusterIP      1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 60   10.98.120.47:20069     ClusterIP      1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 61   192.0.2.233:20080      ExternalIPs    1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 62   192.0.2.233:20069      ExternalIPs    1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 64   0.0.0.0:32755          NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 65   192.168.36.11:32755    NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 66   192.168.36.11:31124    NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 68   0.0.0.0:31124          NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 69   192.168.37.11:31887    NodePort                                 
	 70   192.168.37.11:30903    NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 71   192.168.37.11:31686    NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 72   192.168.37.11:30151    NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 73   192.168.37.11:30909    NodePort       1 => 10.0.1.81:80         
	 74   192.168.37.11:30811    NodePort       1 => 10.0.1.81:69         
	 75   192.168.37.11:30198    NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 76   192.168.37.11:31904    NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 77   192.168.37.11:31970    NodePort       1 => 10.0.0.26:80         
	 78   192.168.37.11:31349    NodePort       1 => 10.0.0.26:69         
	 79   192.168.37.11:31124    NodePort       1 => 10.0.1.81:69         
	                                            2 => 10.0.0.96:69         
	 80   192.168.37.11:32755    NodePort       1 => 10.0.1.81:80         
	                                            2 => 10.0.0.96:80         
	 
Stderr:
 	 

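One detail worth noting in the service list above is that several NodePort frontends (e.g. IDs 38, 39, 41, 42, 56, 58) have no backends at all, which would explain curl exiting with code 28 (timeout). As an illustrative aid for triaging dumps like this (not part of the test suite), here is a minimal sketch that parses the fixed-width `cilium service list` output and flags frontends without backends; the column layout is assumed to match the dump above:

```python
# Sketch: flag frontends in `cilium service list` output that have no
# backends. The parsing assumes the ip:port / type / "N => ip:port"
# layout shown in the dump above.
import re

def find_empty_frontends(output: str):
    """Return (id, frontend, type) tuples for services with no backends."""
    services = []  # each entry: {"id", "frontend", "type", "backends"}
    for line in output.splitlines():
        # A service row starts with an ID, then an ip:port frontend, then a type.
        m = re.match(r"\s*(\d+)\s+(\S+:\d+)\s+(\S+)(?:\s+(\d+) => (\S+))?\s*$", line)
        if m:
            svc = {"id": int(m.group(1)), "frontend": m.group(2),
                   "type": m.group(3), "backends": []}
            if m.group(5):
                svc["backends"].append(m.group(5))
            services.append(svc)
        elif "=>" in line and services:
            # Continuation row: an extra backend for the previous service.
            services[-1]["backends"].append(line.split("=>")[1].strip())
    return [(s["id"], s["frontend"], s["type"])
            for s in services if not s["backends"]]

sample = """\
 38   192.168.36.11:31463    NodePort
 45   192.168.36.11:31970    NodePort       1 => 10.0.0.26:80
                                            2 => 10.0.0.96:80
"""
print(find_empty_frontends(sample))  # [(38, '192.168.36.11:31463', 'NodePort')]
```

Backend-less frontends are expected for some affinity/session test services, so an empty list here is a hint to investigate, not proof of the failure cause.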
cmd: kubectl exec -n kube-system cilium-zfp98 -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                              IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                    
	 168        Disabled           Disabled          10761      k8s:io.cilium.k8s.policy.cluster=default                 fd00::13a   10.0.1.206   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                   
	                                                            k8s:zgroup=testDSClient                                                                   
	 372        Disabled           Disabled          16453      k8s:app=prometheus                                       fd00::112   10.0.1.191   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                    
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                         
	 692        Disabled           Disabled          5683       k8s:io.cilium.k8s.policy.cluster=default                 fd00::17f   10.0.1.81    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                   
	                                                            k8s:zgroup=testDS                                                                         
	 1283       Disabled           Disabled          6100       k8s:app=grafana                                          fd00::1d9   10.0.1.218   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                         
	 1672       Disabled           Disabled          1          reserved:host                                                                     ready   
	 2514       Disabled           Disabled          4          reserved:health                                          fd00::136   10.0.1.125   ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
21:51:03 STEP: Running AfterEach for block EntireTestsuite K8sServicesTest
21:51:03 STEP: Running AfterEach for block EntireTestsuite

1dd77f1b_K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_direct_routing_Tests_with_secondary_NodePort_device.zip
96a5b6da_K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_direct_routing_Tests_with_secondary_NodePort_device.zip

Labels

area/CI (Continuous Integration testing issue or flake), ci/flake (This is a known failure that occurs in the tree. Please investigate me!)
