K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with vxlan Tests NodePort
This failure is identified by curl exit code 7 (failed to connect to host). If you see a different exit code, this is not the right issue.
Initially observed in PR #10819.
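For context (this is background, not part of the original report): curl exit code 7 is `CURLE_COULDNT_CONNECT`, i.e. the TCP connection itself was refused or failed before any HTTP exchange, and `kubectl exec` propagates the container command's exit code unchanged, so the test harness sees the same 7. A minimal local sketch of the mapping, assuming nothing is listening on the chosen port:

```shell
#!/bin/sh
# Curl a localhost port with no listener (port 1 is almost never bound);
# the connection is refused and curl exits with 7 (CURLE_COULDNT_CONNECT).
# This mirrors what the test's kubectl exec ... curl command reported.
curl -s --connect-timeout 5 http://127.0.0.1:1
echo "exit code: $?"
```

So an exit code of 7 here points at the NodePort not accepting connections at all (e.g. missing/broken BPF service translation), rather than a slow or failing backend, which would surface as exit 28 (timeout) or an HTTP-level error via `--fail` (exit 22).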
Affected tests:
- Suite-k8s-1.11.K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with vxlan Tests NodePort
- Suite-k8s-1.11.K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with direct routing Tests NodePort
- Suite-k8s-1.11.K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with XDP, direct routing and SNAT
- Suite-k8s-1.11.K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with XDP, direct routing and Hybrid
- Suite-k8s-1.11.K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with XDP, direct routing and DSR
- Suite-k8s-1.11.K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with TC, direct routing and SNAT
- Suite-k8s-1.11.K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with TC, direct routing and Hybrid
- Suite-k8s-1.11.K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with TC, direct routing and DSR
K8s-1.11 / net-next / kubeproxy-free build.
https://jenkins.cilium.io/job/Cilium-PR-Ginkgo-Tests-Validated/18543/testReport/junit/Suite-k8s-1/11/K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_vxlan_Tests_NodePort/
9789a12b_K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_vxlan_Tests_NodePort.zip
Stacktrace
/home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-Validated/k8s-1.11-gopath/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:430
Pod "testclient-k7ztk" can not connect to service "http://127.0.0.1:31743"
Expected command: kubectl exec -n default testclient-k7ztk -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 8 http://127.0.0.1:31743 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 7
Stdout:
time-> DNS: '0.000020()', Connect: '0.000000',Transfer '0.000000', total '0.000079'
Stderr:
command terminated with exit code 7
/home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-Validated/k8s-1.11-gopath/src/github.com/cilium/cilium/test/k8sT/Services.go:541
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
BPF system config check: NOT OK.
Unable to release endpoint ID
Cilium pods: [cilium-ckg6w cilium-kj495]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod Ingress Egress
coredns-687db6485c-lpk2s
test-k8s2-848b6f7864-s95nv
testclient-k7ztk
testclient-ns7jm
testds-6fprm
testds-6qrzq
Cilium agent 'cilium-ckg6w': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-kj495': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 19 Failed 0
Standard Error
STEP: Installing Cilium
STEP: Installing DNS Deployment
STEP: Restarting DNS Pods
STEP: Performing Cilium preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium controllers preflight check
STEP: Performing Cilium health check
STEP: Performing Cilium service preflight check
STEP: Performing K8s service preflight check
STEP: Waiting for cilium-operator to be ready
STEP: Waiting for kube-dns to be ready
STEP: Running kube-dns preflight check
STEP: Performing K8s service preflight check
STEP: Making ten curl requests from "testclient-k7ztk" to "http://10.96.219.19:10080"
STEP: Making ten curl requests from "testclient-ns7jm" to "http://10.96.219.19:10080"
STEP: Making ten curl requests from "testclient-k7ztk" to "tftp://10.96.219.19:10069/hello"
STEP: Making ten curl requests from "testclient-ns7jm" to "tftp://10.96.219.19:10069/hello"
STEP: Making 10 curl requests from k8s1 to "http://127.0.0.1:31743"
STEP: Making 10 curl requests from k8s1 to "tftp://127.0.0.1:32341/hello"
STEP: Making 10 curl requests from k8s1 to "http://[::ffff:127.0.0.1]:31743"
STEP: Making 10 curl requests from k8s1 to "tftp://[::ffff:127.0.0.1]:32341/hello"
STEP: Making 10 curl requests from k8s1 to "http://192.168.36.11:31743"
STEP: Making 10 curl requests from k8s1 to "tftp://192.168.36.11:32341/hello"
STEP: Making 10 curl requests from k8s1 to "http://[::ffff:192.168.36.11]:31743"
STEP: Making 10 curl requests from k8s1 to "tftp://[::ffff:192.168.36.11]:32341/hello"
STEP: Making 10 curl requests from k8s1 to "http://192.168.36.12:31743"
STEP: Making 10 curl requests from k8s1 to "tftp://192.168.36.12:32341/hello"
STEP: Making 10 curl requests from k8s1 to "http://[::ffff:192.168.36.12]:31743"
STEP: Making 10 curl requests from k8s1 to "tftp://[::ffff:192.168.36.12]:32341/hello"
STEP: Making ten curl requests from "testclient-k7ztk" to "tftp://192.168.36.11:32341/hello"
STEP: Making ten curl requests from "testclient-ns7jm" to "tftp://192.168.36.11:32341/hello"
STEP: Making ten curl requests from "testclient-k7ztk" to "http://192.168.36.11:31743"
STEP: Making ten curl requests from "testclient-ns7jm" to "http://192.168.36.11:31743"
STEP: Making ten curl requests from "testclient-k7ztk" to "tftp://[::ffff:192.168.36.11]:32341/hello"
STEP: Making ten curl requests from "testclient-ns7jm" to "tftp://[::ffff:192.168.36.11]:32341/hello"
STEP: Making ten curl requests from "testclient-k7ztk" to "http://[::ffff:192.168.36.11]:31743"
STEP: Making ten curl requests from "testclient-ns7jm" to "http://[::ffff:192.168.36.11]:31743"
STEP: Making ten curl requests from "testclient-k7ztk" to "http://192.168.36.12:31743"
STEP: Making ten curl requests from "testclient-ns7jm" to "http://192.168.36.12:31743"
STEP: Making ten curl requests from "testclient-k7ztk" to "tftp://192.168.36.12:32341/hello"
STEP: Making ten curl requests from "testclient-ns7jm" to "tftp://192.168.36.12:32341/hello"
STEP: Making ten curl requests from "testclient-k7ztk" to "http://[::ffff:192.168.36.12]:31743"
STEP: Making ten curl requests from "testclient-ns7jm" to "http://[::ffff:192.168.36.12]:31743"
STEP: Making ten curl requests from "testclient-k7ztk" to "tftp://[::ffff:192.168.36.12]:32341/hello"
STEP: Making ten curl requests from "testclient-ns7jm" to "tftp://[::ffff:192.168.36.12]:32341/hello"
STEP: Making 10 curl requests from k8s1 to "http://10.10.0.164:31743"
STEP: Making 10 curl requests from k8s1 to "tftp://10.10.0.164:32341/hello"
STEP: Making 10 curl requests from k8s1 to "http://[::ffff:10.10.0.164]:31743"
STEP: Making 10 curl requests from k8s1 to "tftp://[::ffff:10.10.0.164]:32341/hello"
STEP: Making 10 curl requests from k8s1 to "http://10.10.1.88:31743"
STEP: Making 10 curl requests from k8s1 to "tftp://10.10.1.88:32341/hello"
STEP: Making 10 curl requests from k8s1 to "http://[::ffff:10.10.1.88]:31743"
STEP: Making 10 curl requests from k8s1 to "tftp://[::ffff:10.10.1.88]:32341/hello"
STEP: Making ten curl requests from "testclient-k7ztk" to "http://127.0.0.1:31743"
=== Test Finished at 2020-04-06T21:58:13Z====
===================== TEST FAILED =====================
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default test-k8s2-848b6f7864-s95nv 2/2 Running 0 6m 10.10.1.134 k8s2 <none>
default testclient-k7ztk 1/1 Running 0 6m 10.10.0.66 k8s1 <none>
default testclient-ns7jm 1/1 Running 0 6m 10.10.1.101 k8s2 <none>
default testds-6fprm 2/2 Running 0 6m 10.10.0.217 k8s1 <none>
default testds-6qrzq 2/2 Running 0 6m 10.10.1.17 k8s2 <none>
kube-system cilium-ckg6w 1/1 Running 0 2m 192.168.36.12 k8s2 <none>
kube-system cilium-kj495 1/1 Running 0 2m 192.168.36.11 k8s1 <none>
kube-system cilium-operator-8677f5c767-stwg7 1/1 Running 0 2m 192.168.36.12 k8s2 <none>
kube-system coredns-687db6485c-lpk2s 1/1 Running 0 1m 10.10.1.229 k8s2 <none>
kube-system etcd-k8s1 1/1 Running 0 20m 192.168.36.11 k8s1 <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 20m 192.168.36.11 k8s1 <none>
kube-system kube-controller-manager-k8s1 1/1 Running 0 20m 192.168.36.11 k8s1 <none>
kube-system kube-scheduler-k8s1 1/1 Running 0 20m 192.168.36.11 k8s1 <none>
kube-system log-gatherer-6cxmm 1/1 Running 0 18m 192.168.36.12 k8s2 <none>
kube-system log-gatherer-c4whj 1/1 Running 0 18m 192.168.36.13 k8s3 <none>
kube-system log-gatherer-vccs5 1/1 Running 0 18m 192.168.36.11 k8s1 <none>
kube-system registry-adder-4vbcs 1/1 Running 0 18m 192.168.36.13 k8s3 <none>
kube-system registry-adder-78drt 1/1 Running 0 18m 192.168.36.12 k8s2 <none>
kube-system registry-adder-l47nz 1/1 Running 0 18m 192.168.36.11 k8s1 <none>
Stderr:
Fetching command output from pods [cilium-ckg6w cilium-kj495]
cmd: kubectl exec -n kube-system cilium-ckg6w -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.10:53 ClusterIP 1 => 10.10.1.229:53
2 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
19 10.108.208.182:80 ClusterIP 1 => 10.10.0.217:80
2 => 10.10.1.17:80
20 10.108.208.182:69 ClusterIP 1 => 10.10.0.217:69
2 => 10.10.1.17:69
21 10.96.219.19:10069 ClusterIP 1 => 10.10.0.217:69
2 => 10.10.1.17:69
22 10.96.219.19:10080 ClusterIP 1 => 10.10.0.217:80
2 => 10.10.1.17:80
23 0.0.0.0:32341 NodePort 1 => 10.10.0.217:69
2 => 10.10.1.17:69
24 192.168.36.12:32341 NodePort 1 => 10.10.0.217:69
2 => 10.10.1.17:69
25 10.10.1.88:32341 NodePort 1 => 10.10.0.217:69
2 => 10.10.1.17:69
26 0.0.0.0:31743 NodePort 1 => 10.10.0.217:80
2 => 10.10.1.17:80
27 192.168.36.12:31743 NodePort 1 => 10.10.0.217:80
2 => 10.10.1.17:80
28 10.10.1.88:31743 NodePort 1 => 10.10.0.217:80
2 => 10.10.1.17:80
29 10.102.127.93:10069 ClusterIP 1 => 10.10.0.217:69
2 => 10.10.1.17:69
30 10.102.127.93:10080 ClusterIP 1 => 10.10.0.217:80
2 => 10.10.1.17:80
31 0.0.0.0:30157 NodePort 1 => 10.10.1.17:69
32 192.168.36.12:30157 NodePort 1 => 10.10.1.17:69
33 10.10.1.88:30157 NodePort 1 => 10.10.1.17:69
34 0.0.0.0:30380 NodePort 1 => 10.10.1.17:80
35 192.168.36.12:30380 NodePort 1 => 10.10.1.17:80
36 10.10.1.88:30380 NodePort 1 => 10.10.1.17:80
37 10.99.5.55:10080 ClusterIP 1 => 10.10.1.134:80
38 10.99.5.55:10069 ClusterIP 1 => 10.10.1.134:69
39 0.0.0.0:31801 NodePort 1 => 10.10.1.134:80
40 192.168.36.12:31801 NodePort 1 => 10.10.1.134:80
41 10.10.1.88:31801 NodePort 1 => 10.10.1.134:80
42 0.0.0.0:30169 NodePort 1 => 10.10.1.134:69
43 192.168.36.12:30169 NodePort 1 => 10.10.1.134:69
44 10.10.1.88:30169 NodePort 1 => 10.10.1.134:69
45 10.110.180.206:10080 ClusterIP 1 => 10.10.1.134:80
46 10.110.180.206:10069 ClusterIP 1 => 10.10.1.134:69
47 0.0.0.0:30014 NodePort 1 => 10.10.1.134:80
48 192.168.36.12:30014 NodePort 1 => 10.10.1.134:80
49 10.10.1.88:30014 NodePort 1 => 10.10.1.134:80
50 0.0.0.0:32385 NodePort 1 => 10.10.1.134:69
51 192.168.36.12:32385 NodePort 1 => 10.10.1.134:69
52 10.10.1.88:32385 NodePort 1 => 10.10.1.134:69
53 10.99.168.189:80 ClusterIP 1 => 10.10.0.217:80
2 => 10.10.1.17:80
54 0.0.0.0:31404 NodePort 1 => 10.10.0.217:80
2 => 10.10.1.17:80
55 192.168.36.12:31404 NodePort 1 => 10.10.0.217:80
2 => 10.10.1.17:80
56 10.10.1.88:31404 NodePort 1 => 10.10.0.217:80
2 => 10.10.1.17:80
57 10.109.116.214:80 ClusterIP 1 => 10.10.1.134:80
58 192.168.36.12:31904 NodePort 1 => 10.10.1.134:80
59 10.10.1.88:31904 NodePort 1 => 10.10.1.134:80
60 0.0.0.0:31904 NodePort 1 => 10.10.1.134:80
61 192.168.36.12:8080 HostPort 1 => 10.10.1.134:80
62 192.168.36.12:6969 HostPort 1 => 10.10.1.134:69
Stderr:
cmd: kubectl exec -n kube-system cilium-ckg6w -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
485 Disabled Disabled 41591 k8s:io.cilium.k8s.policy.cluster=default f00d::a0c:0:0:c858 10.10.1.229 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
1114 Disabled Disabled 2122 k8s:io.cilium.k8s.policy.cluster=default f00d::a0c:0:0:8e78 10.10.1.101 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
1179 Disabled Disabled 4 reserved:health f00d::a0c:0:0:5b55 10.10.1.19 ready
1712 Disabled Disabled 19153 k8s:io.cilium.k8s.policy.cluster=default f00d::a0c:0:0:510a 10.10.1.17 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
1748 Disabled Disabled 36625 k8s:io.cilium.k8s.policy.cluster=default f00d::a0c:0:0:1bbf 10.10.1.134 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=test-k8s2
Stderr:
cmd: kubectl exec -n kube-system cilium-kj495 -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
2 10.96.0.10:53 ClusterIP 1 => 10.10.1.229:53
19 10.108.208.182:80 ClusterIP 1 => 10.10.0.217:80
2 => 10.10.1.17:80
20 10.108.208.182:69 ClusterIP 1 => 10.10.0.217:69
2 => 10.10.1.17:69
21 10.96.219.19:10080 ClusterIP 1 => 10.10.0.217:80
2 => 10.10.1.17:80
22 10.96.219.19:10069 ClusterIP 1 => 10.10.0.217:69
2 => 10.10.1.17:69
23 0.0.0.0:31743 NodePort 1 => 10.10.0.217:80
2 => 10.10.1.17:80
24 192.168.36.11:31743 NodePort 1 => 10.10.0.217:80
2 => 10.10.1.17:80
25 10.10.0.164:31743 NodePort 1 => 10.10.0.217:80
2 => 10.10.1.17:80
26 0.0.0.0:32341 NodePort 1 => 10.10.0.217:69
2 => 10.10.1.17:69
27 192.168.36.11:32341 NodePort 1 => 10.10.0.217:69
2 => 10.10.1.17:69
28 10.10.0.164:32341 NodePort 1 => 10.10.0.217:69
2 => 10.10.1.17:69
29 10.102.127.93:10080 ClusterIP 1 => 10.10.0.217:80
2 => 10.10.1.17:80
30 10.102.127.93:10069 ClusterIP 1 => 10.10.0.217:69
2 => 10.10.1.17:69
31 192.168.36.11:30380 NodePort 1 => 10.10.0.217:80
32 10.10.0.164:30380 NodePort 1 => 10.10.0.217:80
33 0.0.0.0:30380 NodePort 1 => 10.10.0.217:80
34 0.0.0.0:30157 NodePort 1 => 10.10.0.217:69
35 192.168.36.11:30157 NodePort 1 => 10.10.0.217:69
36 10.10.0.164:30157 NodePort 1 => 10.10.0.217:69
37 10.99.5.55:10080 ClusterIP 1 => 10.10.1.134:80
38 10.99.5.55:10069 ClusterIP 1 => 10.10.1.134:69
39 0.0.0.0:31801 NodePort
40 192.168.36.11:31801 NodePort
41 10.10.0.164:31801 NodePort
42 0.0.0.0:30169 NodePort
43 192.168.36.11:30169 NodePort
44 10.10.0.164:30169 NodePort
45 10.110.180.206:10080 ClusterIP 1 => 10.10.1.134:80
46 10.110.180.206:10069 ClusterIP 1 => 10.10.1.134:69
47 0.0.0.0:30014 NodePort 1 => 10.10.1.134:80
48 192.168.36.11:30014 NodePort 1 => 10.10.1.134:80
49 10.10.0.164:30014 NodePort 1 => 10.10.1.134:80
50 0.0.0.0:32385 NodePort 1 => 10.10.1.134:69
51 192.168.36.11:32385 NodePort 1 => 10.10.1.134:69
52 10.10.0.164:32385 NodePort 1 => 10.10.1.134:69
53 10.99.168.189:80 ClusterIP 1 => 10.10.0.217:80
2 => 10.10.1.17:80
54 0.0.0.0:31404 NodePort 1 => 10.10.0.217:80
2 => 10.10.1.17:80
55 192.168.36.11:31404 NodePort 1 => 10.10.0.217:80
2 => 10.10.1.17:80
56 10.10.0.164:31404 NodePort 1 => 10.10.0.217:80
2 => 10.10.1.17:80
57 10.109.116.214:80 ClusterIP 1 => 10.10.1.134:80
58 192.168.36.11:31904 NodePort
59 10.10.0.164:31904 NodePort
60 0.0.0.0:31904 NodePort
61 192.168.36.12:8080 HostPort 1 => 10.10.1.134:80
62 192.168.36.12:6969 HostPort 1 => 10.10.1.134:69
Stderr:
cmd: kubectl exec -n kube-system cilium-kj495 -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
2761 Disabled Disabled 19153 k8s:io.cilium.k8s.policy.cluster=default f00d::a0b:0:0:207a 10.10.0.217 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
3067 Disabled Disabled 2122 k8s:io.cilium.k8s.policy.cluster=default f00d::a0b:0:0:df30 10.10.0.66 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
3595 Disabled Disabled 4 reserved:health f00d::a0b:0:0:65e7 10.10.0.254 ready
Stderr:
===================== Exiting AfterFailed =====================