08:42:11 STEP: Making 10 HTTP requests from k8s3 to "http://192.168.36.11:32489" (sessionAffinity)
FAIL: Cannot connect to service "http://192.168.36.11:32489" from k8s3 (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-7zj4z -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 8 http://192.168.36.11:32489 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" | grep 'Hostname:'
To succeed, but it failed:
Exitcode: 1
Stdout:
Stderr:
command terminated with exit code 28
=== Test Finished at 2020-05-26T08:42:16Z====
08:42:16 STEP: Running JustAfterEach block for EntireTestsuite K8sServicesTest
===================== TEST FAILED =====================
08:42:16 STEP: Running AfterFailed block for EntireTestsuite K8sServicesTest
08:42:16 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
08:42:16 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
08:42:17 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
08:42:17 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
cilium-monitoring grafana-55754c97-wpl6d 1/1 Running 0 14m 10.0.0.141 k8s2 <none>
cilium-monitoring prometheus-554b9b9fd9-lwg7j 1/1 Running 0 14m 10.0.0.167 k8s2 <none>
default test-k8s2-848b6f7864-88thn 2/2 Running 0 2m 10.0.0.78 k8s2 <none>
default testclient-hfxwb 1/1 Running 0 2m 10.0.1.48 k8s1 <none>
default testclient-zs6rw 1/1 Running 0 2m 10.0.0.71 k8s2 <none>
default testds-7dm7k 2/2 Running 0 2m 10.0.1.11 k8s1 <none>
default testds-9hxtf 2/2 Running 0 13s 10.0.0.72 k8s2 <none>
kube-system cilium-bj9jm 1/1 Running 0 1m 192.168.36.12 k8s2 <none>
kube-system cilium-jz6xg 1/1 Running 0 1m 192.168.36.11 k8s1 <none>
kube-system cilium-operator-75f5fc9cb7-rl5rf 1/1 Running 0 1m 192.168.36.11 k8s1 <none>
kube-system coredns-687db6485c-62cdr 1/1 Running 0 13m 10.0.0.142 k8s2 <none>
kube-system etcd-k8s1 1/1 Running 0 16m 192.168.36.11 k8s1 <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 15m 192.168.36.11 k8s1 <none>
kube-system kube-controller-manager-k8s1 1/1 Running 0 16m 192.168.36.11 k8s1 <none>
kube-system kube-scheduler-k8s1 1/1 Running 0 16m 192.168.36.11 k8s1 <none>
kube-system log-gatherer-7zj4z 1/1 Running 0 14m 192.168.36.13 k8s3 <none>
kube-system log-gatherer-g8c87 1/1 Running 0 14m 192.168.36.11 k8s1 <none>
kube-system log-gatherer-kgcgx 1/1 Running 0 14m 192.168.36.12 k8s2 <none>
kube-system registry-adder-2mxvq 1/1 Running 0 14m 192.168.36.13 k8s3 <none>
kube-system registry-adder-hfvd4 1/1 Running 0 14m 192.168.36.11 k8s1 <none>
kube-system registry-adder-tmdnn 1/1 Running 0 14m 192.168.36.12 k8s2 <none>
Stderr:
Fetching command output from pods [cilium-bj9jm cilium-jz6xg]
08:42:18 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
08:42:18 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
08:42:18 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
08:42:18 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
08:42:19 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
08:42:19 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
08:42:19 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
08:42:19 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
08:42:19 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
08:42:20 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
08:42:20 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
08:42:20 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
08:42:20 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
08:42:20 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
08:42:33 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'")
08:42:33 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'") => <nil>
08:42:33 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'")
08:42:33 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'") => <nil>
08:42:34 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'")
08:42:34 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs --field-selector='spec.nodeName!=k8s3'") => <nil>
cmd: kubectl exec -n kube-system cilium-bj9jm -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
2 10.96.0.10:53 ClusterIP 1 => 10.0.0.142:53
3 10.110.150.164:9090 ClusterIP 1 => 10.0.0.167:9090
4 10.104.52.106:3000 ClusterIP 1 => 10.0.0.141:3000
9 10.107.17.165:80 ClusterIP 1 => 10.0.0.72:80
2 => 10.0.1.11:80
10 10.107.17.165:69 ClusterIP 1 => 10.0.0.72:69
2 => 10.0.1.11:69
11 10.99.80.77:10080 ClusterIP 1 => 10.0.0.72:80
2 => 10.0.1.11:80
12 10.99.80.77:10069 ClusterIP 1 => 10.0.0.72:69
2 => 10.0.1.11:69
13 10.0.2.15:30038 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
14 192.168.36.12:30038 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
15 0.0.0.0:30038 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
16 10.0.0.172:30038 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
17 192.168.36.12:31586 NodePort 1 => 10.0.0.72:69
2 => 10.0.1.11:69
18 10.0.2.15:31586 NodePort 1 => 10.0.0.72:69
2 => 10.0.1.11:69
19 0.0.0.0:31586 NodePort 1 => 10.0.0.72:69
2 => 10.0.1.11:69
20 10.0.0.172:31586 NodePort 1 => 10.0.0.72:69
2 => 10.0.1.11:69
21 10.103.94.179:10069 ClusterIP 1 => 10.0.0.72:69
2 => 10.0.1.11:69
22 10.103.94.179:10080 ClusterIP 1 => 10.0.0.72:80
2 => 10.0.1.11:80
23 0.0.0.0:32489 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
24 10.0.0.172:32489 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
25 192.168.36.12:32489 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
26 10.0.2.15:32489 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
27 192.168.36.12:30395 NodePort 1 => 10.0.0.72:69
2 => 10.0.1.11:69
28 0.0.0.0:30395 NodePort 1 => 10.0.0.72:69
2 => 10.0.1.11:69
29 10.0.0.172:30395 NodePort 1 => 10.0.0.72:69
2 => 10.0.1.11:69
30 10.0.2.15:30395 NodePort 1 => 10.0.0.72:69
2 => 10.0.1.11:69
31 10.102.5.13:10080 ClusterIP 1 => 10.0.0.72:80
2 => 10.0.1.11:80
32 10.102.5.13:10069 ClusterIP 1 => 10.0.0.72:69
2 => 10.0.1.11:69
33 10.0.2.15:30466 NodePort 1 => 10.0.0.72:80
34 0.0.0.0:30466 NodePort 1 => 10.0.0.72:80
35 10.0.0.172:30466 NodePort 1 => 10.0.0.72:80
36 192.168.36.12:30466 NodePort 1 => 10.0.0.72:80
37 192.168.36.12:31929 NodePort 1 => 10.0.0.72:69
38 10.0.2.15:31929 NodePort 1 => 10.0.0.72:69
39 0.0.0.0:31929 NodePort 1 => 10.0.0.72:69
40 10.0.0.172:31929 NodePort 1 => 10.0.0.72:69
41 10.109.46.143:10080 ClusterIP 1 => 10.0.0.78:80
42 10.109.46.143:10069 ClusterIP 1 => 10.0.0.78:69
43 192.168.36.12:30579 NodePort 1 => 10.0.0.78:80
44 10.0.2.15:30579 NodePort 1 => 10.0.0.78:80
45 0.0.0.0:30579 NodePort 1 => 10.0.0.78:80
46 10.0.0.172:30579 NodePort 1 => 10.0.0.78:80
47 10.0.2.15:31930 NodePort 1 => 10.0.0.78:69
48 0.0.0.0:31930 NodePort 1 => 10.0.0.78:69
49 10.0.0.172:31930 NodePort 1 => 10.0.0.78:69
50 192.168.36.12:31930 NodePort 1 => 10.0.0.78:69
51 10.100.159.238:10080 ClusterIP 1 => 10.0.0.78:80
52 10.100.159.238:10069 ClusterIP 1 => 10.0.0.78:69
53 192.168.36.12:31444 NodePort 1 => 10.0.0.78:80
54 10.0.2.15:31444 NodePort 1 => 10.0.0.78:80
55 0.0.0.0:31444 NodePort 1 => 10.0.0.78:80
56 10.0.0.172:31444 NodePort 1 => 10.0.0.78:80
57 10.0.0.172:31874 NodePort 1 => 10.0.0.78:69
58 192.168.36.12:31874 NodePort 1 => 10.0.0.78:69
59 10.0.2.15:31874 NodePort 1 => 10.0.0.78:69
60 0.0.0.0:31874 NodePort 1 => 10.0.0.78:69
61 10.99.89.138:80 ClusterIP 1 => 10.0.0.72:80
2 => 10.0.1.11:80
62 0.0.0.0:32225 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
63 10.0.0.172:32225 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
64 192.168.36.12:32225 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
65 10.0.2.15:32225 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
66 10.107.237.248:80 ClusterIP 1 => 10.0.0.78:80
67 10.0.0.172:32420 NodePort 1 => 10.0.0.78:80
68 192.168.36.12:32420 NodePort 1 => 10.0.0.78:80
69 10.0.2.15:32420 NodePort 1 => 10.0.0.78:80
70 0.0.0.0:32420 NodePort 1 => 10.0.0.78:80
71 192.168.36.12:8080 HostPort 1 => 10.0.0.78:80
72 192.168.36.12:6969 HostPort 1 => 10.0.0.78:69
Stderr:
cmd: kubectl exec -n kube-system cilium-bj9jm -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
188 Disabled Disabled 9692 k8s:io.cilium.k8s.policy.cluster=default fd00::5f 10.0.0.142 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
389 Disabled Disabled 47908 k8s:io.cilium.k8s.policy.cluster=default fd00::6f 10.0.0.72 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
462 Disabled Disabled 4 reserved:health fd00::a7 10.0.0.113 ready
923 Disabled Disabled 1 reserved:host ready
2047 Disabled Disabled 12925 k8s:io.cilium.k8s.policy.cluster=default fd00::71 10.0.0.71 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
2392 Disabled Disabled 8505 k8s:io.cilium.k8s.policy.cluster=default fd00::9 10.0.0.78 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=test-k8s2
2456 Disabled Disabled 14705 k8s:app=prometheus fd00::e7 10.0.0.167 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s
k8s:io.kubernetes.pod.namespace=cilium-monitoring
2759 Disabled Disabled 22307 k8s:app=grafana fd00::88 10.0.0.141 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=cilium-monitoring
Stderr:
cmd: kubectl exec -n kube-system cilium-jz6xg -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
2 10.96.0.10:53 ClusterIP 1 => 10.0.0.142:53
3 10.110.150.164:9090 ClusterIP 1 => 10.0.0.167:9090
4 10.104.52.106:3000 ClusterIP 1 => 10.0.0.141:3000
9 10.107.17.165:80 ClusterIP 1 => 10.0.0.72:80
2 => 10.0.1.11:80
10 10.107.17.165:69 ClusterIP 1 => 10.0.0.72:69
2 => 10.0.1.11:69
11 10.99.80.77:10080 ClusterIP 1 => 10.0.0.72:80
2 => 10.0.1.11:80
12 10.99.80.77:10069 ClusterIP 1 => 10.0.0.72:69
2 => 10.0.1.11:69
13 10.0.2.15:30038 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
14 192.168.36.11:30038 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
15 0.0.0.0:30038 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
16 10.0.1.3:30038 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
17 192.168.36.11:31586 NodePort 1 => 10.0.0.72:69
2 => 10.0.1.11:69
18 10.0.2.15:31586 NodePort 1 => 10.0.0.72:69
2 => 10.0.1.11:69
19 0.0.0.0:31586 NodePort 1 => 10.0.0.72:69
2 => 10.0.1.11:69
20 10.0.1.3:31586 NodePort 1 => 10.0.0.72:69
2 => 10.0.1.11:69
21 10.103.94.179:10080 ClusterIP 1 => 10.0.0.72:80
2 => 10.0.1.11:80
22 10.103.94.179:10069 ClusterIP 1 => 10.0.0.72:69
2 => 10.0.1.11:69
23 192.168.36.11:32489 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
24 10.0.2.15:32489 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
25 0.0.0.0:32489 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
26 10.0.1.3:32489 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
27 192.168.36.11:30395 NodePort 1 => 10.0.0.72:69
2 => 10.0.1.11:69
28 10.0.2.15:30395 NodePort 1 => 10.0.0.72:69
2 => 10.0.1.11:69
29 0.0.0.0:30395 NodePort 1 => 10.0.0.72:69
2 => 10.0.1.11:69
30 10.0.1.3:30395 NodePort 1 => 10.0.0.72:69
2 => 10.0.1.11:69
31 10.102.5.13:10080 ClusterIP 1 => 10.0.0.72:80
2 => 10.0.1.11:80
32 10.102.5.13:10069 ClusterIP 1 => 10.0.0.72:69
2 => 10.0.1.11:69
33 192.168.36.11:30466 NodePort 1 => 10.0.1.11:80
34 10.0.2.15:30466 NodePort 1 => 10.0.1.11:80
35 0.0.0.0:30466 NodePort 1 => 10.0.1.11:80
36 10.0.1.3:30466 NodePort 1 => 10.0.1.11:80
37 192.168.36.11:31929 NodePort 1 => 10.0.1.11:69
38 10.0.2.15:31929 NodePort 1 => 10.0.1.11:69
39 0.0.0.0:31929 NodePort 1 => 10.0.1.11:69
40 10.0.1.3:31929 NodePort 1 => 10.0.1.11:69
41 10.109.46.143:10080 ClusterIP 1 => 10.0.0.78:80
42 10.109.46.143:10069 ClusterIP 1 => 10.0.0.78:69
43 10.0.1.3:31930 NodePort
44 192.168.36.11:31930 NodePort
45 10.0.2.15:31930 NodePort
46 0.0.0.0:31930 NodePort
47 0.0.0.0:30579 NodePort
48 10.0.1.3:30579 NodePort
49 192.168.36.11:30579 NodePort
50 10.0.2.15:30579 NodePort
51 10.100.159.238:10080 ClusterIP 1 => 10.0.0.78:80
52 10.100.159.238:10069 ClusterIP 1 => 10.0.0.78:69
53 192.168.36.11:31444 NodePort 1 => 10.0.0.78:80
54 10.0.2.15:31444 NodePort 1 => 10.0.0.78:80
55 0.0.0.0:31444 NodePort 1 => 10.0.0.78:80
56 10.0.1.3:31444 NodePort 1 => 10.0.0.78:80
57 10.0.1.3:31874 NodePort 1 => 10.0.0.78:69
58 192.168.36.11:31874 NodePort 1 => 10.0.0.78:69
59 10.0.2.15:31874 NodePort 1 => 10.0.0.78:69
60 0.0.0.0:31874 NodePort 1 => 10.0.0.78:69
61 10.99.89.138:80 ClusterIP 1 => 10.0.0.72:80
2 => 10.0.1.11:80
62 192.168.36.11:32225 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
63 10.0.2.15:32225 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
64 0.0.0.0:32225 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
65 10.0.1.3:32225 NodePort 1 => 10.0.0.72:80
2 => 10.0.1.11:80
66 10.107.237.248:80 ClusterIP 1 => 10.0.0.78:80
67 10.0.2.15:32420 NodePort
68 0.0.0.0:32420 NodePort
69 10.0.1.3:32420 NodePort
70 192.168.36.11:32420 NodePort
Stderr:
cmd: kubectl exec -n kube-system cilium-jz6xg -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
45 Disabled Disabled 12925 k8s:io.cilium.k8s.policy.cluster=default fd00::15c 10.0.1.48 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
752 Disabled Disabled 47908 k8s:io.cilium.k8s.policy.cluster=default fd00::16d 10.0.1.11 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
1690 Disabled Disabled 1 reserved:host ready
3001 Disabled Disabled 4 reserved:health fd00::19a 10.0.1.156 ready
Stderr:
https://jenkins.cilium.io/job/Cilium-PR-K8s-oldest-net-next/386/testReport/Suite-k8s-1/11/K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_vxlan_Tests_NodePort_with_sessionAffinity_from_outside/
According to the CI dashboard, this test has failed 9 times before, both on master and in PRs.
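For triage: curl's exit code 28 (seen in the failure above) is CURLE_OPERATION_TIMEDOUT, i.e. the request did not complete within `--connect-timeout`/`--max-time` — the NodePort accepted nothing usable within the 8s budget. A minimal local illustration of that failure mode, with no cluster needed (the 127.0.0.1:18080 listener below is purely hypothetical):

```shell
# Start a listener that accepts the TCP connection but never answers the
# HTTP request, so curl exhausts --max-time (hypothetical local port).
python3 -c '
import socket, time
s = socket.socket()
s.bind(("127.0.0.1", 18080))
s.listen(1)
conn, _ = s.accept()   # accept the connection, then stall without replying
time.sleep(4)
' &
sleep 1

# Same timeout flags as the failing probe, scaled down.
curl -s --fail --connect-timeout 2 --max-time 2 http://127.0.0.1:18080/
echo "curl exit code: $?"   # 28 = CURLE_OPERATION_TIMEDOUT
wait
```

In the actual failure the connection attempt from k8s3 to 192.168.36.11:32489 behaved the same way, which points at the datapath dropping or black-holing the NodePort traffic rather than actively refusing it (a refusal would give curl exit code 7).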
2304e173_K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_vxlan_Tests_NodePort_with_sessionAffinity_from_outside.zip
/cc @brb