
CI: K8sPolicyTest Basic Test checks all kind of Kubernetes policies #16127

@pchaigno

Description


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next/499/testReport/junit/Suite-k8s-1/16/K8sPolicyTest_Basic_Test_checks_all_kind_of_Kubernetes_policies/
9bd6607b_K8sPolicyTest_Basic_Test_checks_all_kind_of_Kubernetes_policies.zip

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
Cannot connect from "app2-5cc5d58844-2nvnh" to 'http://10.100.11.164//public'
Expected command: kubectl exec -n 202105121615k8spolicytestbasictestchecksallkindofkubernetespoli app2-5cc5d58844-2nvnh -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://10.100.11.164//public -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 22 
Err: exit status 22
Stdout:
 	 time-> DNS: '0.000018(10.100.11.164)', Connect: '0.000381',Transfer '0.000877', total '0.000895'
Stderr:
 	 command terminated with exit code 22
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/Policies.go:352
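For reference, curl exit code 22 is `CURLE_HTTP_RETURNED_ERROR`: with `--fail`, curl exits 22 whenever the server answers with an HTTP status of 400 or above. The non-zero connect and transfer timings in the Stdout above show the probe did reach 10.100.11.164 and got a response back, so this is an HTTP-level rejection rather than a dropped connection. The exit-code behavior can be reproduced locally; the throwaway server below is illustrative only and not part of the test:

```shell
#!/bin/sh
# curl --fail turns any HTTP status >= 400 into exit code 22
# (CURLE_HTTP_RETURNED_ERROR). Demonstrate with a throwaway local
# server that returns 404 for a path that does not exist.
tmpdir=$(mktemp -d)
python3 -m http.server 8080 --directory "$tmpdir" >/dev/null 2>&1 &
srv=$!
sleep 1

# Same flags as the failing probe: --path-as-is keeps the double slash
curl --path-as-is -s --fail --connect-timeout 5 http://127.0.0.1:8080//public
echo "curl exit code: $?"   # prints: curl exit code: 22

kill "$srv"
rm -r "$tmpdir"
```

Exit code 7 (connection refused) or 28 (timeout) would instead point at an L3/L4 drop; 22 means an HTTP error response came back, for example a 403 from the L7 proxy enforcing the policy.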

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
⚠️  Number of "level=warning" in logs: 9
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to update ipcache map entry on pod add
[Unable to use runtime singleton for feature envoy.http.headermap.lazy_map_min_size
Cilium pods: [cilium-k6hdd cilium-r5kzm]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202105121615k8spolicytestbasictestchecksallkindofkubernetespoli::l7-policy 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
app1-5798c5fb6b-npcg4                  
app2-5cc5d58844-2nvnh                  
app3-6c7856c5b5-kdf9z                  
grafana-7fd557d749-dqccq               
prometheus-d87f8f984-qb5cj             
coredns-5495c8f48d-knrg7               
app1-5798c5fb6b-2b2fs                  
Cilium agent 'cilium-k6hdd': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 37 Failed 0
Cilium agent 'cilium-r5kzm': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 38 Failed 0

Standard Error

16:14:47 STEP: Running BeforeAll block for EntireTestsuite K8sPolicyTest
16:14:47 STEP: Ensuring the namespace kube-system exists
16:14:47 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
16:14:47 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
16:14:47 STEP: Installing Cilium
16:14:49 STEP: Waiting for Cilium to become ready
16:15:29 STEP: Validating if Kubernetes DNS is deployed
16:15:29 STEP: Checking if deployment is ready
16:15:30 STEP: Checking if kube-dns service is plumbed correctly
16:15:30 STEP: Checking if DNS can resolve
16:15:30 STEP: Checking if pods have identity
16:15:31 STEP: Kubernetes DNS is up and operational
16:15:31 STEP: Validating Cilium Installation
16:15:31 STEP: Performing Cilium controllers preflight check
16:15:31 STEP: Performing Cilium health check
16:15:31 STEP: Performing Cilium status preflight check
16:15:33 STEP: Performing Cilium service preflight check
16:15:33 STEP: Performing K8s service preflight check
16:15:34 STEP: Waiting for cilium-operator to be ready
16:15:34 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
16:15:34 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
16:15:34 STEP: Running BeforeAll block for EntireTestsuite K8sPolicyTest Basic Test
16:15:34 STEP: Deleting namespace 202105121615k8spolicytestbasictestchecksallkindofkubernetespoli
16:15:35 STEP: Creating namespace 202105121615k8spolicytestbasictestchecksallkindofkubernetespoli
16:15:35 STEP: WaitforPods(namespace="202105121615k8spolicytestbasictestchecksallkindofkubernetespoli", filter="-l zgroup=testapp")
16:15:46 STEP: WaitforPods(namespace="202105121615k8spolicytestbasictestchecksallkindofkubernetespoli", filter="-l zgroup=testapp") => <nil>
16:15:46 STEP: Running BeforeEach block for EntireTestsuite K8sPolicyTest Basic Test
16:15:48 STEP: WaitforPods(namespace="202105121615k8spolicytestbasictestchecksallkindofkubernetespoli", filter="-l zgroup=testapp")
16:15:48 STEP: WaitforPods(namespace="202105121615k8spolicytestbasictestchecksallkindofkubernetespoli", filter="-l zgroup=testapp") => <nil>
16:15:48 STEP: Testing L3/L4 rules
16:16:02 STEP: Testing L3/L4 deny rules
16:16:17 STEP: Testing L7 Policy
FAIL: Cannot connect from "app2-5cc5d58844-2nvnh" to 'http://10.100.11.164//public'
Expected command: kubectl exec -n 202105121615k8spolicytestbasictestchecksallkindofkubernetespoli app2-5cc5d58844-2nvnh -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://10.100.11.164//public -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 22 
Err: exit status 22
Stdout:
 	 time-> DNS: '0.000018(10.100.11.164)', Connect: '0.000381',Transfer '0.000877', total '0.000895'
Stderr:
 	 command terminated with exit code 22
	 

=== Test Finished at 2021-05-12T16:16:20Z====
16:16:20 STEP: Running JustAfterEach block for EntireTestsuite K8sPolicyTest
===================== TEST FAILED =====================
16:16:20 STEP: Running AfterFailed block for EntireTestsuite K8sPolicyTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS    RESTARTS   AGE    IP              NODE   NOMINATED NODE   READINESS GATES
	 202105121615k8spolicytestbasictestchecksallkindofkubernetespoli   app1-5798c5fb6b-2b2fs              2/2     Running   0          48s    10.0.0.212      k8s1   <none>           <none>
	 202105121615k8spolicytestbasictestchecksallkindofkubernetespoli   app1-5798c5fb6b-npcg4              2/2     Running   0          48s    10.0.0.147      k8s1   <none>           <none>
	 202105121615k8spolicytestbasictestchecksallkindofkubernetespoli   app2-5cc5d58844-2nvnh              1/1     Running   0          48s    10.0.0.140      k8s1   <none>           <none>
	 202105121615k8spolicytestbasictestchecksallkindofkubernetespoli   app3-6c7856c5b5-kdf9z              1/1     Running   0          48s    10.0.0.173      k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-7fd557d749-dqccq           1/1     Running   0          105m   10.0.1.171      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-d87f8f984-qb5cj         1/1     Running   0          105m   10.0.1.206      k8s2   <none>           <none>
	 kube-system                                                       cilium-k6hdd                       1/1     Running   0          94s    192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-5b9b7b57b7-2lw8k   1/1     Running   0          94s    192.168.36.13   k8s3   <none>           <none>
	 kube-system                                                       cilium-operator-5b9b7b57b7-7qxnr   1/1     Running   0          94s    192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-r5kzm                       1/1     Running   0          94s    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-5495c8f48d-knrg7           1/1     Running   0          16m    10.0.1.172      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running   0          108m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running   0          108m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running   0          108m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running   0          108m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-66btx                 1/1     Running   0          105m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-9rk4q                 1/1     Running   0          105m   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-hws5z                 1/1     Running   0          105m   192.168.36.13   k8s3   <none>           <none>
	 kube-system                                                       registry-adder-b4hz4               1/1     Running   0          106m   192.168.36.13   k8s3   <none>           <none>
	 kube-system                                                       registry-adder-n9x8x               1/1     Running   0          106m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-t9tk9               1/1     Running   0          106m   192.168.36.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-k6hdd cilium-r5kzm]
cmd: kubectl exec -n kube-system cilium-k6hdd -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend              Service Type   Backend                   
	 1    10.104.191.139:9090   ClusterIP      1 => 10.0.1.206:9090      
	 2    10.96.0.1:443         ClusterIP      1 => 192.168.36.11:6443   
	 3    10.96.0.10:53         ClusterIP      1 => 10.0.1.172:53        
	 4    10.96.0.10:9153       ClusterIP      1 => 10.0.1.172:9153      
	 5    10.102.158.27:3000    ClusterIP      1 => 10.0.1.171:3000      
	 6    10.100.11.164:80      ClusterIP      1 => 10.0.0.147:80        
	                                           2 => 10.0.0.212:80        
	 7    10.100.11.164:69      ClusterIP      1 => 10.0.0.147:69        
	                                           2 => 10.0.0.212:69        
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-k6hdd -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                              IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                    
	 5          Disabled           Disabled          40190      k8s:app=prometheus                                       fd02::127   10.0.1.206   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                    
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                         
	 319        Disabled           Disabled          4          reserved:health                                          fd02::17a   10.0.1.141   ready   
	 495        Disabled           Disabled          37708      k8s:app=grafana                                          fd02::165   10.0.1.171   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                         
	 581        Disabled           Disabled          58729      k8s:io.cilium.k8s.policy.cluster=default                 fd02::199   10.0.1.172   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                           
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                               
	                                                            k8s:k8s-app=kube-dns                                                                      
	 1965       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                        ready   
	                                                            reserved:host                                                                             
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-r5kzm -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend              Service Type   Backend                   
	 1    10.96.0.1:443         ClusterIP      1 => 192.168.36.11:6443   
	 2    10.96.0.10:9153       ClusterIP      1 => 10.0.1.172:9153      
	 3    10.96.0.10:53         ClusterIP      1 => 10.0.1.172:53        
	 4    10.102.158.27:3000    ClusterIP      1 => 10.0.1.171:3000      
	 5    10.104.191.139:9090   ClusterIP      1 => 10.0.1.206:9090      
	 6    10.100.11.164:80      ClusterIP      1 => 10.0.0.147:80        
	                                           2 => 10.0.0.212:80        
	 7    10.100.11.164:69      ClusterIP      1 => 10.0.0.147:69        
	                                           2 => 10.0.0.212:69        
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-r5kzm -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                            
	 202        Enabled            Disabled          10239      k8s:id=app1                                                                                       fd02::7c   10.0.0.147   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app1-account                                                                              
	                                                            k8s:io.kubernetes.pod.namespace=202105121615k8spolicytestbasictestchecksallkindofkubernetespoli                                   
	                                                            k8s:zgroup=testapp                                                                                                                
	 745        Disabled           Disabled          16796      k8s:appSecond=true                                                                                fd02::bc   10.0.0.140   ready   
	                                                            k8s:id=app2                                                                                                                       
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app2-account                                                                              
	                                                            k8s:io.kubernetes.pod.namespace=202105121615k8spolicytestbasictestchecksallkindofkubernetespoli                                   
	                                                            k8s:zgroup=testapp                                                                                                                
	 1781       Disabled           Disabled          4          reserved:health                                                                                   fd02::b8   10.0.0.242   ready   
	 2330       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/master                                                                                                
	                                                            reserved:host                                                                                                                     
	 2467       Enabled            Disabled          10239      k8s:id=app1                                                                                       fd02::ac   10.0.0.212   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app1-account                                                                              
	                                                            k8s:io.kubernetes.pod.namespace=202105121615k8spolicytestbasictestchecksallkindofkubernetespoli                                   
	                                                            k8s:zgroup=testapp                                                                                                                
	 3629       Disabled           Disabled          43611      k8s:id=app3                                                                                       fd02::a6   10.0.0.173   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202105121615k8spolicytestbasictestchecksallkindofkubernetespoli                                   
	                                                            k8s:zgroup=testapp                                                                                                                
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
16:17:17 STEP: Running AfterEach for block EntireTestsuite K8sPolicyTest Basic Test
16:17:17 STEP: Running AfterEach for block EntireTestsuite K8sPolicyTest
16:17:17 STEP: Running AfterEach for block EntireTestsuite
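The report names the policy under test (`l7-policy` in the test namespace) but does not include its contents. For context, a minimal Cilium L7 HTTP ingress policy of the kind this test exercises looks roughly like the sketch below. The selectors mirror the `id=app1`/`id=app2` labels visible in the endpoint list above, but the exact rules, including how the double-slash `//public` path is treated, live in the test's manifest, not here:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: l7-policy
spec:
  endpointSelector:
    matchLabels:
      id: app1          # policy applies to app1 endpoints (ingress Enabled above)
  ingress:
  - fromEndpoints:
    - matchLabels:
        id: app2        # only app2 may connect
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/public"   # L7 proxy rejects non-matching paths with 403
```

With a rule like this, requests to paths that do not match are answered by the proxy with HTTP 403, which curl's `--fail` reports as exit code 22.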

Metadata

Labels:

    area/CI: Continuous Integration testing issue or flake
    ci/flake: This is a known failure that occurs in the tree. Please investigate me!
    stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.
