Suite-k8s-1.14.K8sServicesTest Checks service across nodes with L7 policy Tests NodePort with L7 Policy
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE/808/testReport/Suite-k8s-1/14/K8sServicesTest_Checks_service_across_nodes_with_L7_policy_Tests_NodePort_with_L7_Policy/
f6921cba_K8sServicesTest_Checks_service_across_nodes_with_L7_policy_Tests_NodePort_with_L7_Policy.zip
A similar report was posted in the Slack #testing channel as well.
This seems very similar to #10118, but that one fails with TFTP over IPv6 rather than HTTP over IPv4. If triage shows they share a root cause, we can mark this one as a duplicate and close it; for now it is worth tracking them separately. Please look closely at the initial stack trace below to check whether it matches your failure.
Stacktrace
/home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:431
Pod "testclient-gnw2p" can not connect to service "http://10.39.242.177:10080" (failed in request 9/10)
Expected command: kubectl exec -n default testclient-gnw2p -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 8 http://10.39.242.177:10080 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Stdout:
time-> DNS: '0.000027()', Connect: '0.000000',Transfer '0.000000', total '5.000921'
Stderr:
command terminated with exit code 28
/home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/k8sT/Services.go:572
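For anyone reproducing this by hand: curl exit code 28 means a timeout, and the ~5.0s total time in the output above shows the 5-second --connect-timeout expired before a TCP connection to the frontend was established. A minimal manual repro sketch, assuming the client pod name and ClusterIP from this run (substitute the ones from your own failure):

# Re-run the same probe the test runs; exit code 28 again means the connect timed out.
kubectl exec -n default testclient-gnw2p -- \
  curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 8 \
  http://10.39.242.177:10080
# kubectl exec propagates the container command's exit code:
echo "curl exit code: $?"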
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
[[external/envoy/source/common/protobuf/utility.cc:441] Using deprecated option 'envoy.config.core.v3.Node.hidden_envoy_deprecated_build_version' from file base.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
Cilium pods: [cilium-4f5cd cilium-5thlk]
Netpols loaded:
CiliumNetworkPolicies loaded: default::l7-policy-demo
Endpoint Policy Enforcement:
Pod Ingress Egress
empire-outpost-8888-86bb5475b6-gwjss
pod-to-a-external-1111-7ff666fd8-n47nt
pod-to-b-multi-node-clusterip-666594b445-kf2t7
event-exporter-v0.2.5-7df89f4b8f-4qxmd
cilium-etcd-97jncxtgp9
empire-backup-69f6dffd4b-6qsb5
empire-hq-59d9f9fbc-jsttt
pod-to-a-allowed-cnp-55f885bf8b-jjmqt
pod-to-b-multi-node-nodeport-7cb9c6cb8b-fm52n
test-k8s2-664d69b864-s4tkb
fluentd-gcp-scaler-54ccb89d5-2vgqq
kube-dns-5877696fb4-f4hzk
cilium-etcd-58cgq2x75v
etcd-operator-797978964-xbqvg
netperf-server
pod-to-a-l3-denied-cnp-64c6c75c5d-wlczf
pod-to-b-intra-node-845f955cdc-sr6x6
test-k8s2-664d69b864-f677x
kube-dns-5877696fb4-jpw62
kube-dns-5877696fb4-s762v
l7-default-backend-8f479dd9-879zp
stackdriver-metadata-agent-cluster-level-7966cf9bff-t6wpd
cilium-etcd-n9jqwfrk8f
pod-to-a-59b5fcb7f6-phjrs
pod-to-b-intra-node-hostport-6549fc5b88-bvzqr
kube-dns-5877696fb4-mzjnb
testclient-dhqmq
kube-dns-5877696fb4-9dntr
kube-dns-autoscaler-8687c64fc-5vsld
hubble-cli-7678x
hubble-cli-tbxcx
app3-6cc4675d54-mxd4q
pod-to-b-intra-node-nodeport-5b6fd48f74-l9mhl
testds-525rf
echo-a-5995597649-v2vcx
echo-b-54c9bb5f5c-j6kmh
empire-outpost-9999-8664d77ff8-nfc6c
pod-to-b-multi-node-headless-746f84dff5-fjf7l
heapster-gke-6846b8ffb7-xbw2l
metrics-server-v0.3.1-5c6fbf777-rjfkq
app2-b58dc9b5c-9sql8
pod-to-b-multi-node-hostport-795964f8c8-5vr52
pod-to-external-fqdn-allow-google-cnp-b7b6bcdcb-5x9bv
testclient-gnw2p
testds-p6rvb
kube-dns-5877696fb4-qz7sj
Cilium agent 'cilium-4f5cd': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 42 Failed 0
Cilium agent 'cilium-5thlk': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 42 Failed 0
Standard Error
STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/k8sT/manifests/l7-policy-demo.yaml
STEP: Making 10 curl requests from "testclient-dhqmq" to "http://10.39.242.177:10080"
STEP: Making 10 curl requests from "testclient-gnw2p" to "http://10.39.242.177:10080"
=== Test Finished at 2020-04-29T12:07:52Z====
===================== TEST FAILED =====================
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cilium cilium-4f5cd 1/1 Running 0 16m 10.138.0.42 gke-cilium-ci-7-default-pool-b3b1ae7a-lsgr <none> <none>
cilium cilium-5thlk 1/1 Running 0 16m 10.138.0.45 gke-cilium-ci-7-default-pool-b3b1ae7a-qb6q <none> <none>
cilium cilium-node-init-kwkwt 1/1 Running 0 88m 10.138.0.45 gke-cilium-ci-7-default-pool-b3b1ae7a-qb6q <none> <none>
cilium cilium-node-init-q6smw 1/1 Running 0 88m 10.138.0.42 gke-cilium-ci-7-default-pool-b3b1ae7a-lsgr <none> <none>
cilium cilium-operator-8c95f8555-gvp7w 1/1 Running 0 16m 10.138.0.42 gke-cilium-ci-7-default-pool-b3b1ae7a-lsgr <none> <none>
cilium log-gatherer-r5mrw 1/1 Running 0 89m 10.138.0.45 gke-cilium-ci-7-default-pool-b3b1ae7a-qb6q <none> <none>
cilium log-gatherer-vvslb 1/1 Running 0 89m 10.138.0.42 gke-cilium-ci-7-default-pool-b3b1ae7a-lsgr <none> <none>
default test-k8s2-664d69b864-s4tkb 2/2 Running 0 8m45s 10.36.13.174 gke-cilium-ci-7-default-pool-b3b1ae7a-qb6q <none> <none>
default testclient-dhqmq 1/1 Running 0 8m45s 10.36.14.53 gke-cilium-ci-7-default-pool-b3b1ae7a-lsgr <none> <none>
default testclient-gnw2p 1/1 Running 0 8m45s 10.36.13.14 gke-cilium-ci-7-default-pool-b3b1ae7a-qb6q <none> <none>
default testds-525rf 1/2 Running 0 8m45s 10.36.13.212 gke-cilium-ci-7-default-pool-b3b1ae7a-qb6q <none> <none>
default testds-p6rvb 1/2 Running 0 8m45s 10.36.14.157 gke-cilium-ci-7-default-pool-b3b1ae7a-lsgr <none> <none>
kube-system event-exporter-v0.2.5-7df89f4b8f-4qxmd 2/2 Running 0 16m 10.36.13.170 gke-cilium-ci-7-default-pool-b3b1ae7a-qb6q <none> <none>
kube-system fluentd-gcp-scaler-54ccb89d5-2vgqq 1/1 Running 0 16m 10.36.14.29 gke-cilium-ci-7-default-pool-b3b1ae7a-lsgr <none> <none>
kube-system fluentd-gcp-v3.1.1-d8bld 2/2 Running 0 15m 10.138.0.45 gke-cilium-ci-7-default-pool-b3b1ae7a-qb6q <none> <none>
kube-system fluentd-gcp-v3.1.1-tjj8q 2/2 Running 0 15m 10.138.0.42 gke-cilium-ci-7-default-pool-b3b1ae7a-lsgr <none> <none>
kube-system heapster-gke-6846b8ffb7-xbw2l 3/3 Running 0 16m 10.36.14.36 gke-cilium-ci-7-default-pool-b3b1ae7a-lsgr <none> <none>
kube-system kube-dns-5877696fb4-9dntr 4/4 Running 0 16m 10.36.13.171 gke-cilium-ci-7-default-pool-b3b1ae7a-qb6q <none> <none>
kube-system kube-dns-5877696fb4-mzjnb 4/4 Running 0 16m 10.36.14.62 gke-cilium-ci-7-default-pool-b3b1ae7a-lsgr <none> <none>
kube-system kube-dns-autoscaler-8687c64fc-5vsld 1/1 Running 0 16m 10.36.14.228 gke-cilium-ci-7-default-pool-b3b1ae7a-lsgr <none> <none>
kube-system kube-proxy-gke-cilium-ci-7-default-pool-b3b1ae7a-lsgr 1/1 Running 0 16m 10.138.0.42 gke-cilium-ci-7-default-pool-b3b1ae7a-lsgr <none> <none>
kube-system kube-proxy-gke-cilium-ci-7-default-pool-b3b1ae7a-qb6q 1/1 Running 0 16m 10.138.0.45 gke-cilium-ci-7-default-pool-b3b1ae7a-qb6q <none> <none>
kube-system l7-default-backend-8f479dd9-879zp 1/1 Running 0 16m 10.36.13.92 gke-cilium-ci-7-default-pool-b3b1ae7a-qb6q <none> <none>
kube-system metrics-server-v0.3.1-5c6fbf777-rjfkq 2/2 Running 0 16m 10.36.14.78 gke-cilium-ci-7-default-pool-b3b1ae7a-lsgr <none> <none>
kube-system prometheus-to-sd-f4sfd 2/2 Running 0 15m 10.138.0.42 gke-cilium-ci-7-default-pool-b3b1ae7a-lsgr <none> <none>
kube-system prometheus-to-sd-kl6hb 2/2 Running 0 15m 10.138.0.45 gke-cilium-ci-7-default-pool-b3b1ae7a-qb6q <none> <none>
kube-system registry-adder-47qm7 1/1 Running 0 15m 10.138.0.45 gke-cilium-ci-7-default-pool-b3b1ae7a-qb6q <none> <none>
kube-system registry-adder-vxfjw 1/1 Running 0 15m 10.138.0.42 gke-cilium-ci-7-default-pool-b3b1ae7a-lsgr <none> <none>
kube-system stackdriver-metadata-agent-cluster-level-7966cf9bff-t6wpd 2/2 Running 1 16m 10.36.13.154 gke-cilium-ci-7-default-pool-b3b1ae7a-qb6q <none> <none>
Stderr:
Fetching command output from pods [cilium-4f5cd cilium-5thlk]
cmd: kubectl exec -n cilium cilium-4f5cd -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.39.240.1:443 ClusterIP 1 => 35.203.147.18:443
2 10.39.252.13:80 ClusterIP 1 => 10.36.13.92:8080
3 10.39.240.10:53 ClusterIP 1 => 10.36.13.171:53
2 => 10.36.14.62:53
4 10.39.248.2:443 ClusterIP 1 => 10.36.14.78:443
5 10.39.252.167:80 ClusterIP 1 => 10.36.14.36:8082
18 10.39.244.249:80 ClusterIP
19 10.39.244.249:69 ClusterIP
20 10.39.251.255:2379 ClusterIP
21 10.39.242.177:10080 ClusterIP
22 10.39.242.177:10069 ClusterIP
23 10.39.248.29:10069 ClusterIP
24 10.39.248.29:10080 ClusterIP
25 10.39.255.140:10080 ClusterIP
26 10.39.255.140:10069 ClusterIP
27 10.39.247.149:10080 ClusterIP 1 => 10.36.13.174:80
28 10.39.247.149:10069 ClusterIP 1 => 10.36.13.174:69
29 10.39.246.62:10080 ClusterIP 1 => 10.36.13.174:80
30 10.39.246.62:10069 ClusterIP 1 => 10.36.13.174:69
31 10.39.244.12:80 ClusterIP
32 10.39.249.228:80 ClusterIP 1 => 10.36.13.174:80
Stderr:
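Worth noting: the failing frontend 10.39.242.177:10080 is service ID 21 above, and it lists no backends on this agent (the same holds for the second agent below). If that reflects the actual service state rather than a quirk of the dump, a connect timeout from the client is exactly what we'd expect. A hedged one-liner to spot backendless frontends, assuming the column layout shown above (adjust the pod name for your run):

# Print ClusterIP entries with an empty Backend column (NF == 3 means only ID,
# Frontend and Service Type are present; backend continuation lines are excluded).
kubectl exec -n cilium cilium-4f5cd -- cilium service list | awk 'NF == 3 && $3 == "ClusterIP"'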
cmd: kubectl exec -n cilium cilium-4f5cd -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
240 Enabled Disabled 41997 k8s:io.cilium.k8s.policy.cluster=default 10.36.14.157 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
583 Disabled Disabled 14714 k8s:io.cilium.k8s.policy.cluster=default 10.36.14.29 ready
k8s:io.cilium.k8s.policy.serviceaccount=fluentd-gcp-scaler
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=fluentd-gcp-scaler
870 Disabled Disabled 16599 k8s:io.cilium.k8s.policy.cluster=default 10.36.14.53 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
986 Disabled Disabled 43521 k8s:io.cilium.k8s.policy.cluster=default 10.36.14.36 ready
k8s:io.cilium.k8s.policy.serviceaccount=heapster
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=heapster
k8s:version=v1.7.2
1258 Disabled Disabled 41465 k8s:io.cilium.k8s.policy.cluster=default 10.36.14.78 ready
k8s:io.cilium.k8s.policy.serviceaccount=metrics-server
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=metrics-server
k8s:version=v0.3.1
2074 Disabled Disabled 4 reserved:health 10.36.14.239 ready
2968 Disabled Disabled 26607 k8s:io.cilium.k8s.policy.cluster=default 10.36.14.228 ready
k8s:io.cilium.k8s.policy.serviceaccount=kube-dns-autoscaler
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns-autoscaler
3099 Disabled Disabled 12099 k8s:io.cilium.k8s.policy.cluster=default 10.36.14.62 ready
k8s:io.cilium.k8s.policy.serviceaccount=kube-dns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
Stderr:
cmd: kubectl exec -n cilium cilium-5thlk -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.39.240.10:53 ClusterIP 1 => 10.36.13.171:53
2 => 10.36.14.62:53
2 10.39.240.1:443 ClusterIP 1 => 35.203.147.18:443
3 10.39.252.167:80 ClusterIP 1 => 10.36.14.36:8082
4 10.39.252.13:80 ClusterIP 1 => 10.36.13.92:8080
5 10.39.248.2:443 ClusterIP 1 => 10.36.14.78:443
18 10.39.244.249:69 ClusterIP
19 10.39.244.249:80 ClusterIP
20 10.39.251.255:2379 ClusterIP
21 10.39.242.177:10080 ClusterIP
22 10.39.242.177:10069 ClusterIP
23 10.39.248.29:10080 ClusterIP
24 10.39.248.29:10069 ClusterIP
25 10.39.255.140:10080 ClusterIP
26 10.39.255.140:10069 ClusterIP
27 10.39.247.149:10080 ClusterIP 1 => 10.36.13.174:80
28 10.39.247.149:10069 ClusterIP 1 => 10.36.13.174:69
29 10.39.246.62:10069 ClusterIP 1 => 10.36.13.174:69
30 10.39.246.62:10080 ClusterIP 1 => 10.36.13.174:80
31 10.39.244.12:80 ClusterIP
32 10.39.249.228:80 ClusterIP 1 => 10.36.13.174:80
Stderr:
cmd: kubectl exec -n cilium cilium-5thlk -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
91 Disabled Disabled 4 reserved:health 10.36.13.250 ready
157 Disabled Disabled 25975 k8s:io.cilium.k8s.policy.cluster=default 10.36.13.170 ready
k8s:io.cilium.k8s.policy.serviceaccount=event-exporter-sa
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=event-exporter
k8s:version=v0.2.5
484 Disabled Disabled 22506 k8s:app=stackdriver-metadata-agent 10.36.13.154 ready
k8s:cluster-level=true
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=metadata-agent
k8s:io.kubernetes.pod.namespace=kube-system
782 Disabled Disabled 12099 k8s:io.cilium.k8s.policy.cluster=default 10.36.13.171 ready
k8s:io.cilium.k8s.policy.serviceaccount=kube-dns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
2146 Disabled Disabled 12186 k8s:io.cilium.k8s.policy.cluster=default 10.36.13.92 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=glbc
k8s:name=glbc
2214 Disabled Disabled 1450 k8s:io.cilium.k8s.policy.cluster=default 10.36.13.174 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=test-k8s2
3391 Enabled Disabled 41997 k8s:io.cilium.k8s.policy.cluster=default 10.36.13.212 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
3475 Disabled Disabled 16599 k8s:io.cilium.k8s.policy.cluster=default 10.36.13.14 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
Stderr:
===================== Exiting AfterFailed =====================
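For whoever picks this up: both agents report 0 failed controllers and the testDS endpoints show ingress enforcement Enabled, so the L7 policy was applied; the open question is why the connect to the service frontend never completed. Two standard cilium CLI commands that may help during a rerun (agent pod names are from this run; adjust as needed):

# Watch for datapath drops while re-running the failing curl from testclient-gnw2p:
kubectl exec -n cilium cilium-5thlk -- cilium monitor --type drop
# Dump the realized policy to compare against l7-policy-demo.yaml:
kubectl exec -n cilium cilium-5thlk -- cilium policy get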