CI: K8sPolicyTest: Enforces connectivity correctly when the same L3/L4 CNP is updated: cannot get the revision #12491

@pchaigno

Description

https://jenkins.cilium.io/job/Cilium-PR-K8s-oldest-net-next/1161/testReport/Suite-k8s-1/11/K8sPolicyTest_Basic_Test_Validate_CNP_update_Enforces_connectivity_correctly_when_the_same_L3_L4_CNP_is_updated/
7855aab6_K8sPolicyTest_Basic_Test_Validate_CNP_update_Enforces_connectivity_correctly_when_the_same_L3-L4_CNP_is_updated.zip

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-oldest-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:514
"/home/jenkins/workspace/Cilium-PR-K8s-oldest-net-next/src/github.com/cilium/cilium/test/k8sT/manifests/cnp-update-allow-all.yaml" Policy cannot be applied
Expected
    <*errors.errorString | 0xc00045fd90>: {
        s: "Cannot retrieve cilium pod cilium-7ht5c policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-oldest-net-next/src/github.com/cilium/cilium/test/k8sT/Policies.go:827
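
This failure is the revision poll in test/k8sT/Policies.go giving up: after applying cnp-update-allow-all.yaml, the test asks each Cilium pod for its current policy revision, and cilium-7ht5c returns nothing (note the empty string after "cannot get the revision"). The probe can be reproduced by hand. Below is a minimal sketch in Go, assuming the helper shells out to "cilium policy get -o json" through kubectl (an assumption consistent with the empty stdout interpolated into the error); the pod name is the one from this run:

    // revisionprobe.go: hand-rolled version of the policy-revision check that
    // failed above. Assumes kubectl is on PATH and that the underlying agent
    // command is "cilium policy get -o json".
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func policyRevision(pod string) (int, error) {
    	out, err := exec.Command("kubectl", "exec", "-n", "kube-system", pod,
    		"--", "cilium", "policy", "get", "-o", "json").Output()
    	if err != nil {
    		// The symptom in this flake: the agent is not answering, so there
    		// is no JSON to parse and no revision to report.
    		return -1, fmt.Errorf("cannot get the revision %s", string(out))
    	}
    	var policy struct {
    		Revision int `json:"revision"`
    	}
    	if err := json.Unmarshal(out, &policy); err != nil {
    		return -1, err
    	}
    	return policy.Revision, nil
    }

    func main() {
    	rev, err := policyRevision("cilium-7ht5c")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("policy revision:", rev)
    }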

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 4
⚠️  Number of "level=warning" in logs: 11
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Mutation detector is enabled, this will result in memory leakage.
Unable to enqueue endpoint policy visibility event
Cilium pods: [cilium-7ht5c cilium-8tnd7]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202007092104k8spolicytestbasictestchecksallkindofkubernetespoli::cnp-update 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
prometheus-556f8994ff-zvqhn             
coredns-687db6485c-nbnp2                
app1-78586dccff-c9vvh                   
app1-78586dccff-z4hpc                   
app2-64975875bc-n6sw6                   
app3-688574cd6d-lmdmq                   
grafana-5987b75b56-5vlrg                
Cilium agent 'cilium-8tnd7': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 23 Failed 0

Standard Error

21:16:34 STEP: Running BeforeEach block for EntireTestsuite K8sPolicyTest Basic Test
21:16:36 STEP: WaitforPods(namespace="202007092104k8spolicytestbasictestchecksallkindofkubernetespoli", filter="-l zgroup=testapp")
21:16:36 STEP: WaitforPods(namespace="202007092104k8spolicytestbasictestchecksallkindofkubernetespoli", filter="-l zgroup=testapp") => <nil>
21:16:36 STEP: Running BeforeAll block for EntireTestsuite K8sPolicyTest Basic Test Validate CNP update
21:16:36 STEP: Applying default allow policy
21:16:38 STEP: Applying l3-l4 policy
21:16:44 STEP: Applying no-specs policy
21:16:46 STEP: Applying l3-l4 policy with user-specified labels
21:16:52 STEP: Applying default allow policy (should remove policy with user labels)
FAIL: "/home/jenkins/workspace/Cilium-PR-K8s-oldest-net-next/src/github.com/cilium/cilium/test/k8sT/manifests/cnp-update-allow-all.yaml" Policy cannot be applied
Expected
    <*errors.errorString | 0xc00045fd90>: {
        s: "Cannot retrieve cilium pod cilium-7ht5c policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2020-07-09T21:16:53Z====
21:16:53 STEP: Running JustAfterEach block for EntireTestsuite K8sPolicyTest
===================== TEST FAILED =====================
21:16:53 STEP: Running AfterFailed block for EntireTestsuite K8sPolicyTest
21:16:53 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE
	 202007092104k8spolicytestbasictestchecksallkindofkubernetespoli   app1-78586dccff-c9vvh              2/2     Running   0          12m   10.0.1.241      k8s1   <none>
	 202007092104k8spolicytestbasictestchecksallkindofkubernetespoli   app1-78586dccff-z4hpc              2/2     Running   0          12m   10.0.1.75       k8s1   <none>
	 202007092104k8spolicytestbasictestchecksallkindofkubernetespoli   app2-64975875bc-n6sw6              1/1     Running   0          12m   10.0.1.118      k8s1   <none>
	 202007092104k8spolicytestbasictestchecksallkindofkubernetespoli   app3-688574cd6d-lmdmq              1/1     Running   0          12m   10.0.1.114      k8s1   <none>
	 cilium-monitoring                                                 grafana-5987b75b56-5vlrg           0/1     Running   0          27m   10.0.0.79       k8s2   <none>
	 cilium-monitoring                                                 prometheus-556f8994ff-zvqhn        1/1     Running   0          27m   10.0.0.147      k8s2   <none>
	 kube-system                                                       cilium-7ht5c                       0/1     Running   1          12m   192.168.36.11   k8s1   <none>
	 kube-system                                                       cilium-8tnd7                       1/1     Running   0          12m   192.168.36.12   k8s2   <none>
	 kube-system                                                       cilium-operator-786f45d8df-pzq4p   1/1     Running   0          12m   192.168.36.12   k8s2   <none>
	 kube-system                                                       coredns-687db6485c-nbnp2           1/1     Running   0          18m   10.0.0.189      k8s2   <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running   0          29m   192.168.36.11   k8s1   <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running   0          29m   192.168.36.11   k8s1   <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running   0          30m   192.168.36.11   k8s1   <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running   0          29m   192.168.36.11   k8s1   <none>
	 kube-system                                                       log-gatherer-b7dwr                 1/1     Running   0          27m   192.168.36.11   k8s1   <none>
	 kube-system                                                       log-gatherer-dvf4s                 1/1     Running   0          27m   192.168.36.12   k8s2   <none>
	 kube-system                                                       log-gatherer-q5tjv                 1/1     Running   0          27m   192.168.36.13   k8s3   <none>
	 kube-system                                                       registry-adder-5hgw4               1/1     Running   0          27m   192.168.36.11   k8s1   <none>
	 kube-system                                                       registry-adder-6r2gw               1/1     Running   0          27m   192.168.36.12   k8s2   <none>
	 kube-system                                                       registry-adder-lqlbj               1/1     Running   0          27m   192.168.36.13   k8s3   <none>
	 
Stderr:
 	 

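The table above points at the culprit: cilium-7ht5c on k8s1 is 0/1 Ready with RESTARTS 1, i.e. the agent crashed and was restarted mid-test, which also explains why only cilium-8tnd7 reported an agent status in the Standard Output section. The first triage question is why the previous container exited; the reason and exit code live in the pod's lastState. A small sketch, again in Go around kubectl, using the pod name from this run:

    // lastexit.go: print why the cilium container on k8s1 last terminated.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Literal text outside {} in a kubectl jsonpath template is emitted as-is.
    	jsonpath := "jsonpath={.status.containerStatuses[0].lastState.terminated.reason}" +
    		" (exit code {.status.containerStatuses[0].lastState.terminated.exitCode})"
    	out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod",
    		"cilium-7ht5c", "-o", jsonpath).CombinedOutput()
    	if err != nil {
    		fmt.Println("kubectl failed:", err)
    		return
    	}
    	// Typical answers: "Error (exit code 1)", "OOMKilled (exit code 137)".
    	fmt.Println(string(out))
    }
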
Fetching command output from pods [cilium-7ht5c cilium-8tnd7]
21:16:56 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'")
21:16:56 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'") => <nil>
21:16:57 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'")
21:16:57 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'") => <nil>
21:16:57 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'")
21:16:57 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'") => <nil>
21:17:13 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:17:14 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:17:14 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:17:14 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:17:14 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:17:14 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:17:14 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:17:15 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:17:15 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:17:16 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:17:16 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:17:16 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:17:16 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:17:17 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:17:17 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:17:17 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:17:17 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
21:17:18 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
21:17:18 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
cmd: kubectl exec -n kube-system cilium-7ht5c -- cilium service list
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Error: Cannot get services list: Get "http:///var/run/cilium/cilium.sock/v1/service": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
	 Is the agent running?
	 command terminated with exit code 1
	 

cmd: kubectl exec -n kube-system cilium-7ht5c -- cilium endpoint list
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Error: cannot get endpoint list: Get "http:///var/run/cilium/cilium.sock/v1/endpoint": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
	 Is the agent running?
	 
	 command terminated with exit code 1
	 

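Both execs against cilium-7ht5c fail with "dial unix /var/run/cilium/cilium.sock: connect: no such file or directory": the restarted agent had not yet recreated its API socket, so the CLI (and the test's revision check) had nothing to talk to. To separate "socket missing" from "socket present but unresponsive" during triage, the agent's health endpoint can be probed over the socket directly. A sketch meant to run inside the cilium pod, assuming the default socket path and the agent's standard /v1/healthz route:

    // sockprobe.go: probe the cilium-agent API over its unix socket.
    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"net/http"
    	"time"
    )

    func main() {
    	const sock = "/var/run/cilium/cilium.sock" // default agent socket path
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Ignore the host in the URL and dial the unix socket instead.
    			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
    				return (&net.Dialer{}).DialContext(ctx, "unix", sock)
    			},
    		},
    	}
    	resp, err := client.Get("http://localhost/v1/healthz")
    	if err != nil {
    		// ENOENT here matches this failure: the socket was never recreated.
    		fmt.Println("agent API unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("agent API status:", resp.Status)
    }
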
cmd: kubectl exec -n kube-system cilium-8tnd7 -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend              Service Type   Backend                   
	 1    10.96.0.1:443         ClusterIP      1 => 192.168.36.11:6443   
	 2    10.96.0.10:53         ClusterIP      1 => 10.0.0.189:53        
	 3    10.110.163.198:3000   ClusterIP                                
	 4    10.104.54.15:9090     ClusterIP      1 => 10.0.0.147:9090      
	 5    10.104.243.179:80     ClusterIP      1 => 10.0.1.75:80         
	                                           2 => 10.0.1.241:80        
	 6    10.104.243.179:69     ClusterIP      1 => 10.0.1.75:69         
	                                           2 => 10.0.1.241:69        
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-8tnd7 -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                       IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                            
	 148        Disabled           Disabled          54529      k8s:io.cilium.k8s.policy.cluster=default          fd00::ed   10.0.0.189   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                   
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                       
	                                                            k8s:k8s-app=kube-dns                                                              
	 476        Disabled           Disabled          4          reserved:health                                   fd00::7a   10.0.0.9     ready   
	 555        Disabled           Disabled          1          reserved:host                                                             ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
21:17:25 STEP: Running AfterEach for block EntireTestsuite K8sPolicyTest Basic Test
21:17:25 STEP: Running AfterEach for block EntireTestsuite K8sPolicyTest
21:17:25 STEP: Running AfterEach for block EntireTestsuite

Metadata

    Labels

    area/CI: Continuous Integration testing issue or flake.
    ci/flake: This is a known failure that occurs in the tree. Please investigate me!
    stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.
