
CI: K8sDatapathConfig Encapsulation Check iptables masquerading with random-fully #17353

@maintainer-s-little-helper

Description


Test Name

K8sDatapathConfig Encapsulation Check iptables masquerading with random-fully
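For context, this test checks that pod egress traffic is masqueraded by iptables rules installed with the `--random-fully` flag, which fully randomizes SNAT source-port allocation to avoid port-collision drops behind the same node IP. The rule below is illustrative only: the chain name, interface, and CIDR are assumptions for the sketch, not the exact rule Cilium installs.

```shell
# Illustrative MASQUERADE rule with fully randomized source-port
# allocation (requires iptables >= 1.6.2). Chain, interface, and
# CIDR here are placeholders, not Cilium's actual rule.
iptables -t nat -A POSTROUTING -s 10.0.1.0/24 ! -o cilium_host \
  -j MASQUERADE --random-fully
```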

Failure Output

FAIL: Pod "testclient-8rmwm" can not connect to "http://google.com"

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
Pod "testclient-8rmwm" can not connect to "http://google.com"
Expected command: kubectl exec -n 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-8rmwm -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.002990()', Connect: '0.000000',Transfer '0.000000', total '2.229656'
Stderr:
 	 command terminated with exit code 7
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/DatapathConfiguration.go:276
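For anyone triaging: curl exit code 7 is "Failed to connect to host" (per the curl man page), so the `kubectl exec` plumbing worked but no TCP connection to the target was established, which points at the masqueraded egress path rather than the test harness. The snippet below only illustrates the same exit code locally, by connecting to a port with no listener; the address is an example, not related to the CI environment.

```shell
# Exit code 7 = "Failed to connect to host" (curl man page).
# Connecting to a local port with no listener is refused immediately
# and yields the same exit code the CI test reported.
code=0
curl --silent --fail --connect-timeout 2 http://127.0.0.1:1 >/dev/null || code=$?
echo "curl exit code: $code"
```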

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
Cilium pods: [cilium-c5t2t cilium-wk9qf]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera::l3-policy-demo 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
coredns-867bf6789f-ph95s               
test-k8s2-79ff876c9d-txkwv             
testclient-8rmwm                       
testclient-xmztn                       
testds-84x6d                           
testds-hfgmz                           
Cilium agent 'cilium-c5t2t': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 37 Failed 0
Cilium agent 'cilium-wk9qf': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 29 Failed 0


Standard Error

13:51:29 STEP: Installing Cilium
13:51:31 STEP: Waiting for Cilium to become ready
13:52:18 STEP: Validating if Kubernetes DNS is deployed
13:52:18 STEP: Checking if deployment is ready
13:52:18 STEP: Checking if pods have identity
13:52:18 STEP: Checking if kube-dns service is plumbed correctly
13:52:18 STEP: Checking if DNS can resolve
13:52:19 STEP: Kubernetes DNS is not ready: %!s(<nil>)
13:52:19 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
13:52:37 STEP: Waiting for Kubernetes DNS to become operational
13:52:37 STEP: Checking if deployment is ready
13:52:37 STEP: Checking if kube-dns service is plumbed correctly
13:52:37 STEP: Checking if DNS can resolve
13:52:37 STEP: Checking if pods have identity
13:52:40 STEP: Validating Cilium Installation
13:52:40 STEP: Performing Cilium controllers preflight check
13:52:40 STEP: Performing Cilium status preflight check
13:52:40 STEP: Performing Cilium health check
13:52:42 STEP: Performing Cilium service preflight check
13:52:42 STEP: Performing K8s service preflight check
13:52:43 STEP: Waiting for cilium-operator to be ready
13:52:43 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
13:52:43 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
13:52:43 STEP: Making sure all endpoints are in ready state
13:52:44 STEP: Creating namespace 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera
13:52:44 STEP: Deploying demo_ds.yaml in namespace 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera
13:52:45 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
13:52:49 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
13:52:49 STEP: WaitforNPods(namespace="202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
13:52:58 STEP: WaitforNPods(namespace="202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
13:52:58 STEP: Checking pod connectivity between nodes
13:52:58 STEP: WaitforPods(namespace="202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient")
13:52:58 STEP: WaitforPods(namespace="202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient") => <nil>
13:52:58 STEP: WaitforPods(namespace="202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS")
13:52:58 STEP: WaitforPods(namespace="202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS") => <nil>
13:53:03 STEP: Test iptables masquerading
13:53:03 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
13:53:03 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
13:53:03 STEP: WaitforNPods(namespace="202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
13:53:03 STEP: WaitforNPods(namespace="202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
13:53:03 STEP: Making ten curl requests from "testclient-8rmwm" to "http://google.com"
FAIL: Pod "testclient-8rmwm" can not connect to "http://google.com"
Expected command: kubectl exec -n 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-8rmwm -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.002990()', Connect: '0.000000',Transfer '0.000000', total '2.229656'
Stderr:
 	 command terminated with exit code 7
	 

=== Test Finished at 2021-09-09T13:53:09Z====
13:53:09 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
13:53:10 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                              READY   STATUS    RESTARTS   AGE    IP              NODE   NOMINATED NODE   READINESS GATES
	 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera   test-k8s2-79ff876c9d-txkwv        2/2     Running   0          29s    10.0.1.35       k8s2   <none>           <none>
	 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-8rmwm                  1/1     Running   0          29s    10.0.1.26       k8s2   <none>           <none>
	 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-xmztn                  1/1     Running   0          29s    10.0.0.19       k8s1   <none>           <none>
	 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-84x6d                      2/2     Running   0          29s    10.0.1.131      k8s2   <none>           <none>
	 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-hfgmz                      2/2     Running   0          29s    10.0.0.2        k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-d69c97b9b-pg698           0/1     Running   0          19m    10.0.1.11       k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-655fb888d7-nk54t       1/1     Running   0          19m    10.0.1.27       k8s1   <none>           <none>
	 kube-system                                                       cilium-c5t2t                      1/1     Running   0          102s   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-fb784899f-498lk   1/1     Running   0          102s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-fb784899f-bgldg   1/1     Running   0          102s   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-wk9qf                      1/1     Running   0          102s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-867bf6789f-ph95s          1/1     Running   0          53s    10.0.1.20       k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                         1/1     Running   0          22m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1               1/1     Running   0          22m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1      1/1     Running   0          22m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-bnhzj                  1/1     Running   0          20m    192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-hfbhr                  1/1     Running   0          22m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1               1/1     Running   0          22m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-wjfvl                1/1     Running   0          20m    192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-xk7p6                1/1     Running   0          20m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-nrhvk              1/1     Running   0          20m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-xwnlk              1/1     Running   0          20m    192.168.36.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-c5t2t cilium-wk9qf]
cmd: kubectl exec -n kube-system cilium-c5t2t -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.19 (v1.19.13) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.36.12 fd04::12 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.10.90 (v1.10.90-c762736)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 6/254 allocated from 10.0.1.0/24, IPv6: 6/254 allocated from fd02::100/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      37/37 healthy
	 Proxy Status:           OK, ip 10.0.1.229, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 854/4095 (20.85%), Flows/s: 7.80   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-09-09T13:52:41Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-c5t2t -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                             
	 253        Enabled            Disabled          28638      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::161   10.0.1.131   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=testDS                                                                                                                  
	 289        Disabled           Disabled          62050      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::11c   10.0.1.35    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=test-k8s2                                                                                                               
	 1484       Disabled           Disabled          4          reserved:health                                                                                   fd02::1a8   10.0.1.97    ready   
	 1995       Disabled           Disabled          26606      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::175   10.0.1.20    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                        
	                                                            k8s:k8s-app=kube-dns                                                                                                               
	 2559       Disabled           Disabled          34195      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1bd   10.0.1.26    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=testDSClient                                                                                                            
	 2868       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                 ready   
	                                                            reserved:host                                                                                                                      
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-wk9qf -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.19 (v1.19.13) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.36.11 fd04::11 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.10.90 (v1.10.90-c762736)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      29/29 healthy
	 Proxy Status:           OK, ip 10.0.0.174, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 773/4095 (18.88%), Flows/s: 6.91   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-09-09T13:52:43Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-wk9qf -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6       IPv4        STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                           
	 285        Disabled           Disabled          34195      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::65   10.0.0.19   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera                                  
	                                                            k8s:zgroup=testDSClient                                                                                                          
	 631        Disabled           Disabled          4          reserved:health                                                                                   fd02::9b   10.0.0.97   ready   
	 666        Enabled            Disabled          28638      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::3b   10.0.0.2    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera                                  
	                                                            k8s:zgroup=testDS                                                                                                                
	 3998       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/master                                                                                               
	                                                            reserved:host                                                                                                                    
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
13:54:20 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
13:54:20 STEP: Deleting deployment demo_ds.yaml
13:54:20 STEP: Deleting namespace 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera
13:54:34 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|b2fccc59_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-5.4//960/artifact/b2fccc59_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-5.4//960/artifact/test_results_Cilium-PR-K8s-1.19-kernel-5.4_960_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-5.4/960/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

Metadata

Labels

area/CI: Continuous Integration testing issue or flake
area/datapath: Impacts bpf/ or low-level forwarding details, including map management and monitor messages.
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
feature/ipv6: Relates to IPv6 protocol support
