Suite-k8s-1.18.K8sPolicyTest Basic Test Traffic redirections to proxy Tests DNS proxy visibility without policy
https://jenkins.cilium.io/job/Cilium-PR-K8s-newest-kernel-4.9/296/testReport/junit/Suite-k8s-1/18/K8sPolicyTest_Basic_Test_Traffic_redirections_to_proxy_Tests_DNS_proxy_visibility_without_policy/
bc473822_K8sPolicyTest_Basic_Test_Traffic_redirections_to_proxy_Tests_DNS_proxy_visibility_without_policy.zip
Stacktrace
/home/jenkins/workspace/Cilium-PR-K8s-newest-kernel-4.9/k8s-1.18-gopath/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:471
"app2-88b7f8c4b-4gt2p" cannot curl "http://vagrant-cache.ci.cilium.io"
Expected command: kubectl exec -n 202005201942k8spolicytestbasictestchecksallkindofkubernetespoli app2-88b7f8c4b-4gt2p -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 8 http://vagrant-cache.ci.cilium.io -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Stdout:
time-> DNS: '0.000000()', Connect: '0.000000',Transfer '0.000000', total '5.969835'
Stderr:
command terminated with exit code 28
/home/jenkins/workspace/Cilium-PR-K8s-newest-kernel-4.9/k8s-1.18-gopath/src/github.com/cilium/cilium/test/k8sT/Policies.go:968
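Triage note (not part of the original report): curl exit code 28 is CURLE_OPERATION_TIMEDOUT, i.e. `--max-time 8` was hit. In the `-w` timing line above, `time_namelookup` is 0.000000 and `remote_ip` is empty while the total is ~5.97 s, so the request spent its whole budget waiting on DNS resolution — consistent with the DNS proxy path under test never answering, rather than a slow HTTP server. A hypothetical helper (not part of the test suite) sketching that reading of the timing line:

```python
import re

# Matches the exact "-w" format string used by the failing test above.
TIMING_RE = re.compile(
    r"DNS: '(?P<dns>[\d.]+)\((?P<ip>[^)]*)\)', "
    r"Connect: '(?P<connect>[\d.]+)',"
    r"Transfer '(?P<transfer>[\d.]+)', "
    r"total '(?P<total>[\d.]+)'"
)

def classify(line: str) -> str:
    """Classify which phase a curl request stalled in, from its -w timings."""
    m = TIMING_RE.search(line)
    if not m:
        return "unrecognized timing line"
    dns, connect, transfer, total = (
        float(m.group(k)) for k in ("dns", "connect", "transfer", "total")
    )
    # Zero name-lookup time plus an empty remote IP means resolution never
    # returned; every later phase is necessarily zero as well.
    if total > 0 and dns == 0 and not m.group("ip"):
        return "stalled in DNS resolution"
    if connect == 0:
        return "stalled in TCP connect"
    if transfer == 0:
        return "stalled waiting for first byte"
    return "completed"
```

Applied to the Stdout line from this failure, the helper reports "stalled in DNS resolution", matching the zeroed `time_namelookup` and empty `remote_ip`.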
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-5lzxl cilium-8l8vv]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod Ingress Egress
hubble-cli-s5gvc
hubble-relay-6f958fd7db-mq7pl
app1-5c467757c4-66s78
app1-5c467757c4-6zsxh
app2-88b7f8c4b-4gt2p
app3-76694d56c5-tmb74
coredns-7964865f77-lvxf2
hubble-cli-g6p44
Cilium agent 'cilium-5lzxl': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 35 Failed 0
Cilium agent 'cilium-8l8vv': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Standard Error
19:54:44 STEP: Running BeforeEach block for Basic Test
19:54:49 STEP: Starting monitor and generating traffic which should not redirect to proxy
FAIL: "app2-88b7f8c4b-4gt2p" cannot curl "http://vagrant-cache.ci.cilium.io"
Expected command: kubectl exec -n 202005201942k8spolicytestbasictestchecksallkindofkubernetespoli app2-88b7f8c4b-4gt2p -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 8 http://vagrant-cache.ci.cilium.io -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Stdout:
time-> DNS: '0.000000()', Connect: '0.000000',Transfer '0.000000', total '5.969835'
Stderr:
command terminated with exit code 28
=== Test Finished at 2020-05-20T19:54:59Z====
19:54:59 STEP: Running JustAfterEach block for K8sPolicyTest
===================== TEST FAILED =====================
19:55:00 STEP: Running AfterFailed block for K8sPolicyTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
202005201942k8spolicytestbasictestchecksallkindofkubernetespoli app1-5c467757c4-66s78 2/2 Running 0 12m 10.0.0.214 k8s1 <none> <none>
202005201942k8spolicytestbasictestchecksallkindofkubernetespoli app1-5c467757c4-6zsxh 2/2 Running 0 12m 10.0.0.111 k8s1 <none> <none>
202005201942k8spolicytestbasictestchecksallkindofkubernetespoli app2-88b7f8c4b-4gt2p 1/1 Running 0 12m 10.0.0.89 k8s1 <none> <none>
202005201942k8spolicytestbasictestchecksallkindofkubernetespoli app3-76694d56c5-tmb74 1/1 Running 0 12m 10.0.0.62 k8s1 <none> <none>
kube-system cilium-5lzxl 1/1 Running 0 13m 192.168.36.11 k8s1 <none> <none>
kube-system cilium-8l8vv 1/1 Running 0 13m 192.168.36.12 k8s2 <none> <none>
kube-system cilium-operator-684dff6c84-bs9s2 1/1 Running 0 13m 192.168.36.12 k8s2 <none> <none>
kube-system coredns-7964865f77-lvxf2 1/1 Running 0 12m 10.0.1.211 k8s2 <none> <none>
kube-system etcd-k8s1 1/1 Running 0 60m 192.168.36.11 k8s1 <none> <none>
kube-system hubble-cli-g6p44 1/1 Running 2 15m 10.0.1.65 k8s2 <none> <none>
kube-system hubble-cli-s5gvc 1/1 Running 2 15m 10.0.0.46 k8s1 <none> <none>
kube-system hubble-relay-6f958fd7db-mq7pl 1/1 Running 0 15m 10.0.1.76 k8s2 <none> <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 60m 192.168.36.11 k8s1 <none> <none>
kube-system kube-controller-manager-k8s1 1/1 Running 0 60m 192.168.36.11 k8s1 <none> <none>
kube-system kube-proxy-2nzkn 1/1 Running 0 58m 192.168.36.12 k8s2 <none> <none>
kube-system kube-proxy-5dhlc 1/1 Running 0 60m 192.168.36.11 k8s1 <none> <none>
kube-system kube-scheduler-k8s1 1/1 Running 0 60m 192.168.36.11 k8s1 <none> <none>
kube-system log-gatherer-8pm89 1/1 Running 0 58m 192.168.36.11 k8s1 <none> <none>
kube-system log-gatherer-vcjzk 1/1 Running 0 58m 192.168.36.12 k8s2 <none> <none>
kube-system registry-adder-hxzdm 1/1 Running 0 58m 192.168.36.12 k8s2 <none> <none>
kube-system registry-adder-tnr7t 1/1 Running 0 58m 192.168.36.11 k8s1 <none> <none>
Stderr:
Fetching command output from pods [cilium-5lzxl cilium-8l8vv]
cmd: kubectl exec -n kube-system cilium-5lzxl -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
2 10.96.0.10:53 ClusterIP 1 => 10.0.1.211:53
3 10.96.0.10:9153 ClusterIP 1 => 10.0.1.211:9153
4 10.96.106.233:80 ClusterIP 1 => 10.0.1.76:4245
5 10.98.131.11:80 ClusterIP 1 => 10.0.0.111:80
2 => 10.0.0.214:80
6 10.98.131.11:69 ClusterIP 1 => 10.0.0.111:69
2 => 10.0.0.214:69
Stderr:
cmd: kubectl exec -n kube-system cilium-5lzxl -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
15 Disabled Disabled 50390 k8s:io.cilium.k8s.policy.cluster=default fd00::bf 10.0.0.46 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=hubble-cli
52 Disabled Disabled 30701 k8s:id=app1 fd00::88 10.0.0.214 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=202005201942k8spolicytestbasictestchecksallkindofkubernetespoli
k8s:zgroup=testapp
74 Disabled Disabled 25633 k8s:appSecond=true fd00::df 10.0.0.89 ready
k8s:id=app2
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app2-account
k8s:io.kubernetes.pod.namespace=202005201942k8spolicytestbasictestchecksallkindofkubernetespoli
k8s:zgroup=testapp
1463 Disabled Disabled 2737 k8s:id=app3 fd00::99 10.0.0.62 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=202005201942k8spolicytestbasictestchecksallkindofkubernetespoli
k8s:zgroup=testapp
1579 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready
k8s:node-role.kubernetes.io/master
reserved:host
1749 Disabled Disabled 4 reserved:health fd00::79 10.0.0.26 ready
2167 Disabled Disabled 30701 k8s:id=app1 fd00::f7 10.0.0.111 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=202005201942k8spolicytestbasictestchecksallkindofkubernetespoli
k8s:zgroup=testapp
Stderr:
cmd: kubectl exec -n kube-system cilium-8l8vv -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
2 10.96.0.10:53 ClusterIP 1 => 10.0.1.211:53
3 10.96.0.10:9153 ClusterIP 1 => 10.0.1.211:9153
4 10.96.106.233:80 ClusterIP 1 => 10.0.1.76:4245
5 10.98.131.11:80 ClusterIP 1 => 10.0.0.111:80
2 => 10.0.0.214:80
6 10.98.131.11:69 ClusterIP 1 => 10.0.0.111:69
2 => 10.0.0.214:69
Stderr:
cmd: kubectl exec -n kube-system cilium-8l8vv -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
1393 Disabled Disabled 4 reserved:health fd00::1d3 10.0.1.60 ready
1833 Disabled Disabled 27120 k8s:io.cilium.k8s.policy.cluster=default fd00::171 10.0.1.211 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
2566 Disabled Disabled 328 k8s:io.cilium.k8s.policy.cluster=default fd00::1dd 10.0.1.76 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=hubble-relay
2896 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 ready
reserved:host
3252 Disabled Disabled 50390 k8s:io.cilium.k8s.policy.cluster=default fd00::1ca 10.0.1.65 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=hubble-cli
Stderr:
===================== Exiting AfterFailed =====================
19:55:43 STEP: Running AfterEach for block Traffic redirections to proxy
19:55:44 STEP: Running AfterEach for block Basic Test
19:55:44 STEP: Running AfterEach for block K8sPolicyTest
19:55:44 STEP: Running AfterEach for block EntireTestsuite