CI: net-next K8sPolicyTest Multi-node policy test validates fromEntities policies Validates fromEntities all policy #18520

@nbusseneau

Description

@nbusseneau

Test Name

K8sPolicyTest Multi-node policy test validates fromEntities policies Validates fromEntities all policy

Failure Output

FAIL: Can not connect to service "10.0.0.142" from outside cluster (1/1)

Stack Trace

/home/jenkins/workspace/cilium-master-k8s-1.23-kernel-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Can not connect to service "10.0.0.142" from outside cluster (1/1)
Expected command: kubectl exec -n kube-system log-gatherer-9krf4 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 10.0.0.142 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 28 
Err: exit status 28
Stdout:
 	 time-> DNS: '0.000019(10.0.0.142)', Connect: '0.001862',Transfer '0.000000', total '20.001986'
Stderr:
 	 command terminated with exit code 28
	 

/home/jenkins/workspace/cilium-master-k8s-1.23-kernel-net-next/src/github.com/cilium/cilium/test/k8sT/service_helpers.go:299
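For context on the failure signature: curl exit code 28 is "operation timed out". The `-w` output above shows `Connect: '0.001862'` but `Transfer '0.000000'` and `total '20.001986'`, i.e. the TCP handshake to 10.0.0.142 completed almost instantly, but no HTTP response ever arrived before `--max-time 20` expired. The same signature can be reproduced locally with a stand-in listener that accepts the connection but never replies (the Python listener and port 18080 here are illustrative, not part of the test suite):

```shell
# Stand-in for the unresponsive backend: accept the TCP connection,
# then never send any HTTP response.
python3 - <<'PYEOF' &
import socket, time
s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("127.0.0.1", 18080))
s.listen(1)
conn, _ = s.accept()  # TCP handshake completes here...
time.sleep(30)        # ...but no reply is ever written
PYEOF
SERVER_PID=$!
sleep 1  # give the listener time to bind

# Same curl shape as the test (shortened --max-time): connect succeeds,
# transfer never starts, curl exits 28 when --max-time is hit.
rc=0
curl -s --connect-timeout 5 --max-time 2 http://127.0.0.1:18080/ \
  -w "Connect: '%{time_connect}', total '%{time_total}'\n" || rc=$?
echo "curl exit code: $rc"
kill $SERVER_PID 2>/dev/null
```

This distinguishes the observed failure (response dropped or never sent after connect, consistent with an ingress policy drop on the reply path or backend not answering) from a SYN being dropped outright, which would instead fail at the 5s `--connect-timeout` with `Connect: '0.000000'`.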

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-dwg6w cilium-nz8b6]
Netpols loaded: 
CiliumNetworkPolicies loaded: default::cnp-default-deny-ingress default::from-entities-all 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
coredns-6874cd75d4-t2fc6               
test-k8s2-647d6dd9cd-vvflz             
testclient-2jqxd                       
testclient-57zfz                       
testds-4n8l8                           
testds-qtf6z                           
Cilium agent 'cilium-dwg6w': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 37 Failed 0
Cilium agent 'cilium-nz8b6': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 37 Failed 0
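For readers unfamiliar with the policies named above: `default::cnp-default-deny-ingress` and `default::from-entities-all` correspond roughly to the manifests below. This is a sketch assuming the standard CiliumNetworkPolicy schema; the `zgroup` selectors are illustrative (they match the `testDS` pods that show ingress enforcement enabled in the endpoint list), not copied from the test manifests.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: cnp-default-deny-ingress
spec:
  endpointSelector:
    matchLabels:
      zgroup: testDS
  ingress:
  - {}          # empty rule: deny-all ingress baseline
---
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: from-entities-all
spec:
  endpointSelector:
    matchLabels:
      zgroup: testDS
  ingress:
  - fromEntities:
    - all       # should re-allow ingress from every entity, including "world"
```

With `fromEntities: [all]` loaded on top of the deny baseline, the request from the k8s3 (outside) node should have been allowed, which is why the timeout above is a test failure rather than expected policy behavior.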

Standard Error

11:16:14 STEP: Installing default-deny ingress policy
11:16:16 STEP: Checking that remote-node is disallowed by default
11:16:16 STEP: Checking ingress connectivity from world to k8s1 pod
11:16:16 STEP: Checking ingress connectivity from k8s1 node to k8s1 pod (host)
11:16:16 STEP: Checking ingress connectivity from k8s1 pod to k8s2 pod
11:16:16 STEP: Checking ingress connectivity from k8s1 node to k8s2 pod (remote-node)
11:16:16 STEP: Adding a static route to 10.0.0.142 on the k8s3 node (outside)
11:16:16 STEP: Making 1 HTTP requests from outside cluster to "10.0.0.142"
11:16:21 STEP: Installing fromEntities all policy
11:16:22 STEP: Checking policy correctness
11:16:22 STEP: Checking ingress connectivity from world to k8s1 pod
11:16:22 STEP: Checking ingress connectivity from k8s1 node to k8s1 pod (host)
11:16:22 STEP: Checking ingress connectivity from k8s1 node to k8s2 pod (remote-node)
11:16:22 STEP: Checking ingress connectivity from k8s1 pod to k8s2 pod
11:16:22 STEP: Adding a static route to 10.0.0.142 on the k8s3 node (outside)
11:16:22 STEP: Making 1 HTTP requests from outside cluster to "10.0.0.142"
FAIL: Can not connect to service "10.0.0.142" from outside cluster (1/1)
Expected command: kubectl exec -n kube-system log-gatherer-9krf4 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 10.0.0.142 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 28 
Err: exit status 28
Stdout:
 	 time-> DNS: '0.000019(10.0.0.142)', Connect: '0.001862',Transfer '0.000000', total '20.001986'
Stderr:
 	 command terminated with exit code 28
	 

=== Test Finished at 2022-01-18T11:16:43Z====
11:16:43 STEP: Running JustAfterEach block for EntireTestsuite K8sPolicyTest
===================== TEST FAILED =====================
11:16:43 STEP: Running AfterFailed block for EntireTestsuite K8sPolicyTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS    RESTARTS   AGE    IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-6c7d4c9fd8-cps4m           0/1     Running   0          80m    10.0.0.134      k8s1   <none>           <none>
	 cilium-monitoring   prometheus-55777f54d9-cbsb5        1/1     Running   0          80m    10.0.0.202      k8s1   <none>           <none>
	 default             test-k8s2-647d6dd9cd-vvflz         2/2     Running   0          7m7s   10.0.1.229      k8s2   <none>           <none>
	 default             testclient-2jqxd                   1/1     Running   0          7m7s   10.0.0.167      k8s1   <none>           <none>
	 default             testclient-57zfz                   1/1     Running   0          7m7s   10.0.1.222      k8s2   <none>           <none>
	 default             testds-4n8l8                       2/2     Running   0          7m7s   10.0.0.142      k8s1   <none>           <none>
	 default             testds-qtf6z                       2/2     Running   0          7m7s   10.0.1.140      k8s2   <none>           <none>
	 kube-system         cilium-dwg6w                       1/1     Running   0          103s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-nz8b6                       1/1     Running   0          103s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-6ff4dbbb77-fftq8   1/1     Running   0          102s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-6ff4dbbb77-gcl9x   1/1     Running   0          103s   192.168.56.13   k8s3   <none>           <none>
	 kube-system         coredns-6874cd75d4-t2fc6           1/1     Running   0          20m    10.0.0.247      k8s1   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running   0          84m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running   0          84m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running   0          84m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running   0          84m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-9krf4                 1/1     Running   0          80m    192.168.56.13   k8s3   <none>           <none>
	 kube-system         log-gatherer-czkwz                 1/1     Running   0          80m    192.168.56.12   k8s2   <none>           <none>
	 kube-system         log-gatherer-d28hn                 1/1     Running   0          80m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-6q5fk               1/1     Running   0          80m    192.168.56.12   k8s2   <none>           <none>
	 kube-system         registry-adder-jw2zl               1/1     Running   0          80m    192.168.56.13   k8s3   <none>           <none>
	 kube-system         registry-adder-znzfj               1/1     Running   0          80m    192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-dwg6w cilium-nz8b6]
cmd: kubectl exec -n kube-system cilium-dwg6w -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend                Service Type   Backend                   
	 1    10.96.0.1:443           ClusterIP      1 => 192.168.56.11:6443   
	 2    10.96.0.10:53           ClusterIP      1 => 10.0.0.247:53        
	 3    10.96.0.10:9153         ClusterIP      1 => 10.0.0.247:9153      
	 4    10.97.11.88:3000        ClusterIP                                
	 5    10.99.212.235:9090      ClusterIP      1 => 10.0.0.202:9090      
	 8    10.110.44.195:80        ClusterIP      1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 9    10.110.44.195:69        ClusterIP      1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 10   10.105.164.163:10080    ClusterIP      1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 11   10.105.164.163:10069    ClusterIP      1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 12   192.168.56.11:30483     NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 13   0.0.0.0:30483           NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 14   10.0.2.15:30483         NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 15   192.168.56.11:31107     NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 16   10.0.2.15:31107         NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 17   0.0.0.0:31107           NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 18   10.111.203.32:10080     ClusterIP      1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 19   10.111.203.32:10069     ClusterIP      1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 20   10.0.2.15:32444         NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 21   192.168.56.11:32444     NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 22   0.0.0.0:32444           NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 23   0.0.0.0:30708           NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 24   10.0.2.15:30708         NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 25   192.168.56.11:30708     NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 26   10.98.112.103:10069     ClusterIP      1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 27   10.98.112.103:10080     ClusterIP      1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 28   10.0.2.15:32057         NodePort       1 => 10.0.0.142:80        
	 29   10.0.2.15:32057/i       NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 30   192.168.56.11:32057     NodePort       1 => 10.0.0.142:80        
	 31   192.168.56.11:32057/i   NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 32   0.0.0.0:32057           NodePort       1 => 10.0.0.142:80        
	 33   0.0.0.0:32057/i         NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 34   10.0.2.15:30033         NodePort       1 => 10.0.0.142:69        
	 35   10.0.2.15:30033/i       NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 36   192.168.56.11:30033     NodePort       1 => 10.0.0.142:69        
	 37   192.168.56.11:30033/i   NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 38   0.0.0.0:30033           NodePort       1 => 10.0.0.142:69        
	 39   0.0.0.0:30033/i         NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 40   10.96.18.134:10069      ClusterIP      1 => 10.0.1.229:69        
	 41   10.96.18.134:10080      ClusterIP      1 => 10.0.1.229:80        
	 42   10.0.2.15:30816         NodePort                                 
	 43   10.0.2.15:30816/i       NodePort       1 => 10.0.1.229:80        
	 44   192.168.56.11:30816     NodePort                                 
	 45   192.168.56.11:30816/i   NodePort       1 => 10.0.1.229:80        
	 46   0.0.0.0:30816           NodePort                                 
	 47   0.0.0.0:30816/i         NodePort       1 => 10.0.1.229:80        
	 48   10.0.2.15:31902         NodePort                                 
	 49   10.0.2.15:31902/i       NodePort       1 => 10.0.1.229:69        
	 50   192.168.56.11:31902     NodePort                                 
	 51   192.168.56.11:31902/i   NodePort       1 => 10.0.1.229:69        
	 52   0.0.0.0:31902           NodePort                                 
	 53   0.0.0.0:31902/i         NodePort       1 => 10.0.1.229:69        
	 54   10.96.186.232:10069     ClusterIP      1 => 10.0.1.229:69        
	 55   10.96.186.232:10080     ClusterIP      1 => 10.0.1.229:80        
	 56   192.168.56.11:31673     NodePort       1 => 10.0.1.229:69        
	 57   10.0.2.15:31673         NodePort       1 => 10.0.1.229:69        
	 58   0.0.0.0:31673           NodePort       1 => 10.0.1.229:69        
	 59   10.0.2.15:31207         NodePort       1 => 10.0.1.229:80        
	 60   192.168.56.11:31207     NodePort       1 => 10.0.1.229:80        
	 61   0.0.0.0:31207           NodePort       1 => 10.0.1.229:80        
	 62   10.100.44.251:80        ClusterIP      1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 63   10.0.2.15:30712         NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 64   192.168.56.11:30712     NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 65   0.0.0.0:30712           NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 66   10.106.75.85:80         ClusterIP      1 => 10.0.1.229:80        
	 67   10.0.2.15:30118         NodePort                                 
	 68   10.0.2.15:30118/i       NodePort       1 => 10.0.1.229:80        
	 69   192.168.56.11:30118     NodePort                                 
	 70   192.168.56.11:30118/i   NodePort       1 => 10.0.1.229:80        
	 71   0.0.0.0:30118           NodePort                                 
	 72   0.0.0.0:30118/i         NodePort       1 => 10.0.1.229:80        
	 73   10.109.21.166:20080     ClusterIP      1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 74   10.109.21.166:20069     ClusterIP      1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 75   192.0.2.233:20080       ExternalIPs    1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 76   192.0.2.233:20069       ExternalIPs    1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 77   10.0.2.15:31645         NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 78   192.168.56.11:31645     NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 79   0.0.0.0:31645           NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 80   10.0.2.15:32100         NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 81   192.168.56.11:32100     NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 82   0.0.0.0:32100           NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-dwg6w -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                       
	 37         Disabled           Disabled          4          reserved:health                                                              fd02::9a   10.0.0.74    ready   
	 43         Enabled            Disabled          13989      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default       fd02::67   10.0.0.167   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                     
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                              
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                      
	                                                            k8s:zgroup=testDSClient                                                                                      
	 1459       Enabled            Disabled          9011       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default       fd02::8a   10.0.0.142   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                     
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                              
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                      
	                                                            k8s:zgroup=testDS                                                                                            
	 3451       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                           ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                    
	                                                            k8s:node-role.kubernetes.io/master                                                                           
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                  
	                                                            reserved:host                                                                                                
	 3848       Disabled           Disabled          12901      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system   fd02::d3   10.0.0.247   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                     
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                              
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                  
	                                                            k8s:k8s-app=kube-dns                                                                                         
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-nz8b6 -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend                Service Type   Backend                   
	 1    10.96.0.1:443           ClusterIP      1 => 192.168.56.11:6443   
	 2    10.96.0.10:53           ClusterIP      1 => 10.0.0.247:53        
	 3    10.96.0.10:9153         ClusterIP      1 => 10.0.0.247:9153      
	 4    10.99.212.235:9090      ClusterIP      1 => 10.0.0.202:9090      
	 5    10.97.11.88:3000        ClusterIP                                
	 8    10.110.44.195:80        ClusterIP      1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 9    10.110.44.195:69        ClusterIP      1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 10   10.105.164.163:10080    ClusterIP      1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 11   10.105.164.163:10069    ClusterIP      1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 12   0.0.0.0:30483           NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 13   10.0.2.15:30483         NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 14   192.168.56.12:30483     NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 15   10.0.2.15:31107         NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 16   192.168.56.12:31107     NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 17   0.0.0.0:31107           NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 18   10.111.203.32:10080     ClusterIP      1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 19   10.111.203.32:10069     ClusterIP      1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 20   192.168.56.12:32444     NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 21   10.0.2.15:32444         NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 22   0.0.0.0:32444           NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 23   10.0.2.15:30708         NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 24   192.168.56.12:30708     NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 25   0.0.0.0:30708           NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 26   10.98.112.103:10080     ClusterIP      1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 27   10.98.112.103:10069     ClusterIP      1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 28   192.168.56.12:32057     NodePort       1 => 10.0.1.140:80        
	 29   192.168.56.12:32057/i   NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 30   0.0.0.0:32057           NodePort       1 => 10.0.1.140:80        
	 31   0.0.0.0:32057/i         NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 32   10.0.2.15:32057         NodePort       1 => 10.0.1.140:80        
	 33   10.0.2.15:32057/i       NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 34   192.168.56.12:30033     NodePort       1 => 10.0.1.140:69        
	 35   192.168.56.12:30033/i   NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 36   10.0.2.15:30033         NodePort       1 => 10.0.1.140:69        
	 37   10.0.2.15:30033/i       NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 38   0.0.0.0:30033           NodePort       1 => 10.0.1.140:69        
	 39   0.0.0.0:30033/i         NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 40   10.96.18.134:10080      ClusterIP      1 => 10.0.1.229:80        
	 41   10.96.18.134:10069      ClusterIP      1 => 10.0.1.229:69        
	 42   10.0.2.15:30816         NodePort       1 => 10.0.1.229:80        
	 43   10.0.2.15:30816/i       NodePort       1 => 10.0.1.229:80        
	 44   192.168.56.12:30816     NodePort       1 => 10.0.1.229:80        
	 45   192.168.56.12:30816/i   NodePort       1 => 10.0.1.229:80        
	 46   0.0.0.0:30816           NodePort       1 => 10.0.1.229:80        
	 47   0.0.0.0:30816/i         NodePort       1 => 10.0.1.229:80        
	 48   192.168.56.12:31902     NodePort       1 => 10.0.1.229:69        
	 49   192.168.56.12:31902/i   NodePort       1 => 10.0.1.229:69        
	 50   0.0.0.0:31902           NodePort       1 => 10.0.1.229:69        
	 51   0.0.0.0:31902/i         NodePort       1 => 10.0.1.229:69        
	 52   10.0.2.15:31902         NodePort       1 => 10.0.1.229:69        
	 53   10.0.2.15:31902/i       NodePort       1 => 10.0.1.229:69        
	 54   10.96.186.232:10080     ClusterIP      1 => 10.0.1.229:80        
	 55   10.96.186.232:10069     ClusterIP      1 => 10.0.1.229:69        
	 56   192.168.56.12:31673     NodePort       1 => 10.0.1.229:69        
	 57   10.0.2.15:31673         NodePort       1 => 10.0.1.229:69        
	 58   0.0.0.0:31673           NodePort       1 => 10.0.1.229:69        
	 59   0.0.0.0:31207           NodePort       1 => 10.0.1.229:80        
	 60   10.0.2.15:31207         NodePort       1 => 10.0.1.229:80        
	 61   192.168.56.12:31207     NodePort       1 => 10.0.1.229:80        
	 62   10.100.44.251:80        ClusterIP      1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 63   10.0.2.15:30712         NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 64   192.168.56.12:30712     NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 65   0.0.0.0:30712           NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 66   10.106.75.85:80         ClusterIP      1 => 10.0.1.229:80        
	 67   10.0.2.15:30118         NodePort       1 => 10.0.1.229:80        
	 68   10.0.2.15:30118/i       NodePort       1 => 10.0.1.229:80        
	 69   192.168.56.12:30118     NodePort       1 => 10.0.1.229:80        
	 70   192.168.56.12:30118/i   NodePort       1 => 10.0.1.229:80        
	 71   0.0.0.0:30118           NodePort       1 => 10.0.1.229:80        
	 72   0.0.0.0:30118/i         NodePort       1 => 10.0.1.229:80        
	 73   10.109.21.166:20080     ClusterIP      1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 74   10.109.21.166:20069     ClusterIP      1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 75   192.0.2.233:20080       ExternalIPs    1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 76   192.0.2.233:20069       ExternalIPs    1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 77   10.0.2.15:31645         NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 78   192.168.56.12:31645     NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 79   0.0.0.0:31645           NodePort       1 => 10.0.0.142:80        
	                                             2 => 10.0.1.140:80        
	 80   10.0.2.15:32100         NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 81   192.168.56.12:32100     NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 82   0.0.0.0:32100           NodePort       1 => 10.0.0.142:69        
	                                             2 => 10.0.1.140:69        
	 83   10.0.2.15:8080          HostPort       1 => 10.0.1.229:80        
	 84   192.168.56.12:8080      HostPort       1 => 10.0.1.229:80        
	 85   0.0.0.0:8080            HostPort       1 => 10.0.1.229:80        
	 86   [fd04::12]:8080         HostPort       1 => [fd02::1da]:80       
	 87   [::]:8080               HostPort       1 => [fd02::1da]:80       
	 88   10.0.2.15:6969          HostPort       1 => 10.0.1.229:69        
	 89   192.168.56.12:6969      HostPort       1 => 10.0.1.229:69        
	 90   0.0.0.0:6969            HostPort       1 => 10.0.1.229:69        
	 91   [fd04::12]:6969         HostPort       1 => [fd02::1da]:69       
	 92   [::]:6969               HostPort       1 => [fd02::1da]:69       
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-nz8b6 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                              IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                    
	 1818       Enabled            Disabled          51835      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::1da   10.0.1.229   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:zgroup=test-k8s2                                                                                      
	 1904       Enabled            Disabled          9011       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::171   10.0.1.140   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:zgroup=testDS                                                                                         
	 2111       Enabled            Disabled          13989      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::12f   10.0.1.222   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                   
	 2598       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                        ready   
	                                                            reserved:host                                                                                             
	 3100       Disabled           Disabled          4          reserved:health                                                          fd02::16f   10.0.1.14    ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
11:17:41 STEP: Running AfterEach for block EntireTestsuite K8sPolicyTest Multi-node policy test
11:17:41 STEP: Cleaning up after the test
11:17:41 STEP: Running AfterEach for block EntireTestsuite K8sPolicyTest
11:17:42 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|7b5e3b7a_K8sPolicyTest_Multi-node_policy_test_validates_fromEntities_policies_Validates_fromEntities_all_policy.zip]]
11:17:45 STEP: Running AfterAll block for EntireTestsuite K8sPolicyTest Multi-node policy test validates fromEntities policies
11:17:45 STEP: Redeploying Cilium with default configuration
11:17:45 STEP: Installing Cilium
11:17:48 STEP: Waiting for Cilium to become ready
11:18:11 STEP: Validating Cilium Installation
11:18:11 STEP: Performing Cilium controllers preflight check
11:18:11 STEP: Performing Cilium status preflight check
11:18:11 STEP: Performing Cilium health check
11:18:13 STEP: Performing Cilium service preflight check
11:18:13 STEP: Performing K8s service preflight check
11:18:13 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-d9b2r': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), clean-cilium-state (init)
	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

11:18:13 STEP: Performing Cilium controllers preflight check
11:18:13 STEP: Performing Cilium health check
11:18:13 STEP: Performing Cilium status preflight check
11:18:15 STEP: Performing Cilium service preflight check
11:18:15 STEP: Performing K8s service preflight check
11:18:15 STEP: Performing Cilium status preflight check
11:18:15 STEP: Performing Cilium controllers preflight check
11:18:15 STEP: Performing Cilium health check
11:18:18 STEP: Performing Cilium service preflight check
11:18:18 STEP: Performing K8s service preflight check
11:18:18 STEP: Performing Cilium controllers preflight check
11:18:18 STEP: Performing Cilium status preflight check
11:18:18 STEP: Performing Cilium health check
11:18:20 STEP: Performing Cilium service preflight check
11:18:20 STEP: Performing K8s service preflight check
11:18:20 STEP: Performing Cilium controllers preflight check
11:18:20 STEP: Performing Cilium health check
11:18:20 STEP: Performing Cilium status preflight check
11:18:22 STEP: Performing Cilium service preflight check
11:18:22 STEP: Performing K8s service preflight check
11:18:22 STEP: Performing Cilium controllers preflight check
11:18:22 STEP: Performing Cilium health check
11:18:22 STEP: Performing Cilium status preflight check
11:18:25 STEP: Performing Cilium service preflight check
11:18:25 STEP: Performing K8s service preflight check
11:18:25 STEP: Performing Cilium controllers preflight check
11:18:25 STEP: Performing Cilium health check
11:18:25 STEP: Performing Cilium status preflight check
11:18:27 STEP: Performing Cilium service preflight check
11:18:27 STEP: Performing K8s service preflight check
11:18:27 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-d9b2r': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), clean-cilium-state (init)
	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

11:18:27 STEP: Performing Cilium controllers preflight check
11:18:27 STEP: Performing Cilium health check
11:18:27 STEP: Performing Cilium status preflight check
11:18:29 STEP: Performing Cilium service preflight check
11:18:29 STEP: Performing K8s service preflight check
11:18:29 STEP: Performing Cilium status preflight check
11:18:29 STEP: Performing Cilium controllers preflight check
11:18:29 STEP: Performing Cilium health check
11:18:31 STEP: Performing Cilium service preflight check
11:18:31 STEP: Performing K8s service preflight check
11:18:32 STEP: Waiting for cilium-operator to be ready
11:18:32 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
11:18:32 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>

Resources

Anything else?

This looks similar to #12511 but is not a duplicate.

The failing test has been quarantined in #18544 and can be moved out of quarantine once this issue is fixed.

The issue started to appear after we upgraded the net-next Vagrant VM in #18496.
Initially detected by @ysksuzuki on Slack, who also did the initial investigation:

It seems the issue is that the source port of the response gets rewritten and is no longer the expected 80:

  • This is with kernel version 5.16 (VM rev 120):
0xffff97017af8c000       [<empty>]             ip_local_out    1719085423226 mark=0x0 ifindex=0 proto=0 mtu=0 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]           __ip_local_out    1719085464594 mark=0x0 ifindex=0 proto=0 mtu=0 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]                ip_output    1719085472779 mark=0x0 ifindex=0 proto=8 mtu=0 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]             nf_hook_slow    1719085482047 mark=0x0 ifindex=64 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]    apparmor_ip_postroute    1719085492586 mark=0x0 ifindex=64 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]         ip_finish_output    1719085501463 mark=0x0 ifindex=64 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>] __cgroup_bpf_run_filter_skb    1719085511732 mark=0x0 ifindex=64 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]       __ip_finish_output    1719085521821 mark=0x0 ifindex=64 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]        ip_finish_output2    1719085533143 mark=0x0 ifindex=64 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]           dev_queue_xmit    1719085544604 mark=0x0 ifindex=64 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]         __dev_queue_xmit    1719085555334 mark=0x0 ifindex=64 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]      netdev_core_pick_tx    1719085564061 mark=0x0 ifindex=64 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]       netif_skb_features    1719085577476 mark=0x0 ifindex=64 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]  passthru_features_check    1719085589519 mark=0x0 ifindex=64 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]     skb_network_protocol    1719085599227 mark=0x0 ifindex=64 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]  skb_csum_hwoffload_help    1719085611059 mark=0x0 ifindex=64 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]      dev_hard_start_xmit    1719085624525 mark=0x0 ifindex=64 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]                veth_xmit    1719085638982 mark=0x0 ifindex=64 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]   skb_clone_tx_timestamp    1719085652808 mark=0x0 ifindex=64 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]        __dev_forward_skb    1719085663488 mark=0x0 ifindex=64 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]       __dev_forward_skb2    1719085672555 mark=0x0 ifindex=64 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]         skb_scrub_packet    1719085681783 mark=0x0 ifindex=64 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]           eth_type_trans    1719085711809 mark=0x0 ifindex=64 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]                 netif_rx    1719085841203 mark=0x0 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]        netif_rx_internal    1719085853656 mark=0x0 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]       enqueue_to_backlog    1719085864336 mark=0x0 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]      __netif_receive_skb    1719085880687 mark=0x0 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>] __netif_receive_skb_one_core    1719085893170 mark=0x0 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]             tcf_classify    1719085904832 mark=0x0 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]      skb_ensure_writable    1719085923628 mark=0x0 ifindex=65 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]      skb_ensure_writable    1719085932935 mark=0x0 ifindex=65 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]      skb_ensure_writable    1719085981086 mark=0x0 ifindex=65 proto=8 mtu=1500 len=78 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]                   ip_rcv    1719085992097 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]             __sock_wfree    1719086005943 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]             nf_hook_slow    1719086015351 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]             ipt_do_table    1719086027002 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]        ipv4_conntrack_in    1719086039606 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]          nf_conntrack_in    1719086052721 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]  nf_conntrack_tcp_packet    1719086065405 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]              nf_checksum    1719086077838 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]           nf_ip_checksum    1719086088689 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]      iptable_mangle_hook    1719086100190 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]             ipt_do_table    1719086110830 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]   __inet_lookup_listener    1719086136639 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]       inet_lhash2_lookup    1719086146778 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]       inet_lhash2_lookup    1719086156646 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]  nf_nat_ipv4_pre_routing    1719086167156 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]           nf_nat_ipv4_fn    1719086176754 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]           nf_nat_inet_fn    1719086195710 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]            nf_nat_packet    1719086204847 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]            ip_rcv_finish    1719086214526 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]       tcp_v4_early_demux    1719086223633 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]     ip_route_input_noref    1719086269249 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]       ip_route_input_rcu    1719086280179 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]      ip_route_input_slow    1719086293594 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]      fib_validate_source    1719086309705 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]    __fib_validate_source    1719086319383 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]               ip_forward    1719086335243 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]             nf_hook_slow    1719086346234 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]      iptable_mangle_hook    1719086357956 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]             ipt_do_table    1719086370119 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]             ipt_do_table    1719086379095 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]        ip_forward_finish    1719086392490 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]                ip_output    1719086403391 mark=0xeff70f00 ifindex=65 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]             nf_hook_slow    1719086412679 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]    apparmor_ip_postroute    1719086421095 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]      iptable_mangle_hook    1719086439249 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]             ipt_do_table    1719086449418 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]          nf_nat_ipv4_out    1719086461450 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]           nf_nat_ipv4_fn    1719086476949 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]           nf_nat_inet_fn    1719086491617 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]            nf_nat_packet    1719086501876 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]         nf_nat_manip_pkt    1719086511334 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]    nf_nat_ipv4_manip_pkt    1719086523918 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]      skb_ensure_writable    1719086534989 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]        l4proto_manip_pkt    1719086556249 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]      skb_ensure_writable    1719086568382 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:80->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]           nf_csum_update    1719086578100 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:229->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>] inet_proto_csum_replace4    1719086590614 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:229->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>] inet_proto_csum_replace4    1719086618947 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:229->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]        nf_xfrm_me_harder    1719086628114 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:229->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]    __xfrm_decode_session    1719086639415 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:229->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]             ipv4_confirm    1719086649174 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:229->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]               nf_confirm    1719086662930 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:229->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]         ip_finish_output    1719086674612 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:229->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]       __ip_finish_output    1719086686103 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:229->192.168.56.13:57036(tcp)
0xffff97017af8c000       [<empty>]        ip_finish_output2    1719086696483 mark=0xeff70f00 ifindex=3 proto=8 mtu=1500 len=64 10.0.0.202:229->192.168.56.13:57036(tcp)
  • Compare to kernel 5.15 (VM rev 115) where the test is working:
0xffff969cc77e92e0         [nginx]             ip_local_out    1893685419984 mark=0x0 ifindex=0 proto=0 mtu=0 len=52 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]           __ip_local_out    1893685440572 mark=0x0 ifindex=0 proto=0 mtu=0 len=52 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]                ip_output    1893685470598 mark=0x0 ifindex=0 proto=8 mtu=0 len=52 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]             nf_hook_slow    1893685478903 mark=0x0 ifindex=62 proto=8 mtu=1500 len=52 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]    apparmor_ip_postroute    1893685484834 mark=0x0 ifindex=62 proto=8 mtu=1500 len=52 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]         ip_finish_output    1893685492549 mark=0x0 ifindex=62 proto=8 mtu=1500 len=52 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx] __cgroup_bpf_run_filter_skb    1893685697199 mark=0x0 ifindex=62 proto=8 mtu=1500 len=52 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]       __ip_finish_output    1893685703190 mark=0x0 ifindex=62 proto=8 mtu=1500 len=52 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]        ip_finish_output2    1893685709391 mark=0x0 ifindex=62 proto=8 mtu=1500 len=52 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]           dev_queue_xmit    1893685715052 mark=0x0 ifindex=62 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]         __dev_queue_xmit    1893685720312 mark=0x0 ifindex=62 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]      netdev_core_pick_tx    1893685725671 mark=0x0 ifindex=62 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]       netif_skb_features    1893685730180 mark=0x0 ifindex=62 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]  passthru_features_check    1893685736542 mark=0x0 ifindex=62 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]     skb_network_protocol    1893685745248 mark=0x0 ifindex=62 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]  skb_csum_hwoffload_help    1893685751259 mark=0x0 ifindex=62 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]      dev_hard_start_xmit    1893685757280 mark=0x0 ifindex=62 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]                veth_xmit    1893685762951 mark=0x0 ifindex=62 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]   skb_clone_tx_timestamp    1893685769733 mark=0x0 ifindex=62 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]        __dev_forward_skb    1893685775214 mark=0x0 ifindex=62 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]       __dev_forward_skb2    1893685780794 mark=0x0 ifindex=62 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]         skb_scrub_packet    1893685786455 mark=0x0 ifindex=62 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]           eth_type_trans    1893685791073 mark=0x0 ifindex=62 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]                 netif_rx    1893685797164 mark=0x0 ifindex=63 proto=8 mtu=1500 len=52 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]        netif_rx_internal    1893685808906 mark=0x0 ifindex=63 proto=8 mtu=1500 len=52 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]       enqueue_to_backlog    1893685826699 mark=0x0 ifindex=63 proto=8 mtu=1500 len=52 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]      __netif_receive_skb    1893685839503 mark=0x0 ifindex=63 proto=8 mtu=1500 len=52 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx] __netif_receive_skb_one_core    1893685851365 mark=0x0 ifindex=63 proto=8 mtu=1500 len=52 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]             tcf_classify    1893685858999 mark=0x0 ifindex=63 proto=8 mtu=1500 len=52 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]      skb_ensure_writable    1893685867725 mark=0x0 ifindex=63 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0         [nginx]      skb_ensure_writable    1893685873777 mark=0x0 ifindex=63 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e86e0       [<empty>]   skb_release_head_state    1893681938338 mark=0xaa7c0f00 ifindex=3 proto=8 mtu=1500 len=642 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e86e0       [<empty>]         skb_release_data    1893681955049 mark=0xaa7c0f00 ifindex=3 proto=8 mtu=1500 len=642 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e86e0       [<empty>]            skb_free_head    1893681964025 mark=0xaa7c0f00 ifindex=3 proto=8 mtu=1500 len=642 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e86e0       [<empty>]             kfree_skbmem    1893681976499 mark=0xaa7c0f00 ifindex=3 proto=8 mtu=1500 len=642 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0       [<empty>]         napi_consume_skb    1893686788666 mark=0xaa7c0f00 ifindex=3 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0       [<empty>]          skb_release_all    1893686794977 mark=0xaa7c0f00 ifindex=3 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0       [<empty>]   skb_release_head_state    1893686800077 mark=0xaa7c0f00 ifindex=3 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0       [<empty>]         skb_release_data    1893686805797 mark=0xaa7c0f00 ifindex=3 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0       [<empty>]            skb_free_head    1893686817379 mark=0xaa7c0f00 ifindex=3 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
0xffff969cc77e92e0       [<empty>]             kfree_skbmem    1893686821316 mark=0xaa7c0f00 ifindex=3 proto=8 mtu=1500 len=66 10.0.0.55:80->192.168.56.13:46086(tcp)
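As a side note for anyone scanning similar pwru traces by hand: below is a small hypothetical helper (not part of the Cilium test suite; the column positions and tuple format are assumed from the trace output pasted above) that flags the first line where the source port diverges from the expected one.

```python
import re

# Tail of a pwru trace line: "SRC_IP:SRC_PORT->DST_IP:DST_PORT(proto)"
TUPLE_RE = re.compile(
    r"(\d{1,3}(?:\.\d{1,3}){3}):(\d+)->(\d{1,3}(?:\.\d{1,3}){3}):(\d+)\(\w+\)"
)

def first_src_port_rewrite(lines, expected_port):
    """Return (kernel_function, observed_port) for the first trace line whose
    source port differs from expected_port, or None if no rewrite is seen."""
    for line in lines:
        match = TUPLE_RE.search(line)
        if not match:
            continue
        src_port = int(match.group(2))
        if src_port != expected_port:
            # Third whitespace-separated field is the kernel function name,
            # after the skb address and the [process] columns (assumed layout).
            return line.split()[2], src_port
    return None

# Two lines condensed from the 5.16 trace above (middle fields trimmed):
trace = [
    "0xffff97017af8c000 [<empty>] l4proto_manip_pkt 1719086556249 10.0.0.202:80->192.168.56.13:57036(tcp)",
    "0xffff97017af8c000 [<empty>] nf_csum_update 1719086578100 10.0.0.202:229->192.168.56.13:57036(tcp)",
]
print(first_src_port_rewrite(trace, 80))  # -> ('nf_csum_update', 229)
```

Run against the full 5.16 trace, it points at `nf_csum_update` as the first line carrying the rewritten port, consistent with the nf_nat `l4proto_manip_pkt` rewrite visible just before it in the log.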

Labels

area/CI: Continuous Integration testing issue or flake
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
