CI failure
Hit in net-next builds on branch v1.7:
- Suite-k8s-1.11.K8sDatapathConfig Encapsulation Check connectivity with sockops and VXLAN encapsulation
- Suite-k8s-1.11.K8sDatapathConfig AutoDirectNodeRoutes Check connectivity with sockops and direct routing
https://jenkins.cilium.io/job/Cilium-PR-Ginkgo-Tests-Validated/19791/
test_results_Cilium-PR-Ginkgo-Tests-Validated_19791_BDD-Test-PR-K8s-1.11-net-next-kubeproxy-free.zip
The following error appears in the Cilium logs after the test run:
level=error msg="Failed to load bpf_sockops: signal: segmentation fault (core dumped): Error: bpftool built without PID iterator support\n" subsys=sockops
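The agent shells out to bpftool to attach the sockops program, and the bundled bpftool segfaults because it was built without PID iterator support. A quick way to check what the local bpftool reports (a diagnostic sketch, assuming bpftool may or may not be on PATH; the exact feature set depends on the libbpf it was built against):

```shell
# Probe the locally installed bpftool, if any.
# `bpftool version` prints the tool version; a bpftool built without
# PID iterator support is typically one compiled against an older libbpf.
if command -v bpftool >/dev/null 2>&1; then
  bpftool version
else
  echo "bpftool not installed"
fi
```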
Stacktrace
/home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-Validated/k8s-1.11-gopath/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:357
Found a "segmentation fault" in Cilium Logs
/home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-Validated/k8s-1.11-gopath/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:546
Standard Output
⚠️ Found a "segmentation fault" in logs
⚠️ Found a "segmentation fault" in logs
⚠️ Found a "segmentation fault" in logs
⚠️ Found a "segmentation fault" in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 2
⚠️ Number of "level=warning" in logs: 16
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 5 errors/warnings:
Unable to release endpoint ID
Unable to restore endpoint, ignoring
Failed to load bpf_sockops: signal: segmentation fault (core dumped): Error: bpftool built without PID iterator support\n
Disabled '--sockops-enable' due to missing BPF kernel support
BPF system config check: NOT OK.
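The counts above come from the test harness grepping the collected Cilium logs for known bad substrings. A minimal grep-based sketch of that scan (the pattern list mirrors the report above; the file name `cilium.log` is an assumption, not the harness's actual layout):

```shell
# Sketch of the CI log scan: count known bad substrings in a log file.
# Write a two-line sample log, then count matches the way the harness does.
cat > cilium.log <<'EOF'
level=error msg="Failed to load bpf_sockops: signal: segmentation fault (core dumped)" subsys=sockops
level=warning msg="BPF system config check: NOT OK." subsys=linux-datapath
EOF

for pattern in "segmentation fault" "context deadline exceeded" "level=error" "level=warning"; do
  # grep -c prints the number of matching lines (0 if none; ignore exit status).
  count=$(grep -c -- "$pattern" cilium.log || true)
  echo "Number of \"$pattern\" in logs: $count"
done
```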
Cilium pods: [cilium-2tffq cilium-kk22b]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod Ingress Egress
test-k8s2-8d47f6c76-9kgfw
test-k8s2-8d47f6c76-v4bg7
testclient-tcs8g
testds-hc9lm
testds-lnhnm
testds-bj4dd
testds-qm68r
coredns-687db6485c-h6cgp
testclient-99dbl
testclient-hxvxl
testclient-l8qgz
testclient-zmz5b
testds-hhr8s
app2-dc85b4585-8r7qz
app3-68fb594d47-ndf6d
test-k8s2-8d47f6c76-zpffn
testclient-5lbcs
testds-f257h
app1-798d4f944d-hz75k
Cilium agent 'cilium-2tffq': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 36 Failed 0
Cilium agent 'cilium-kk22b': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Standard Error
STEP: Installing Cilium
STEP: Installing DNS Deployment
STEP: Performing Cilium preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium controllers preflight check
STEP: Performing Cilium health check
STEP: Performing Cilium service preflight check
STEP: Performing K8s service preflight check
STEP: Waiting for cilium-operator to be ready
STEP: Waiting for kube-dns to be ready
STEP: Running kube-dns preflight check
STEP: Performing K8s service preflight check
STEP: Making sure all endpoints are in ready state
STEP: Checking that BPF tunnels are in place
STEP: Checking pod connectivity between nodes
STEP: Checking pod connectivity between nodes
=== Test Finished at 2020-07-01T23:43:54Z====
===================== TEST FAILED =====================
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default test-k8s2-8d47f6c76-v4bg7 2/2 Running 0 1m 10.10.1.236 k8s2 <none>
default testclient-l8qgz 1/1 Running 0 1m 10.10.1.111 k8s2 <none>
default testclient-zmz5b 1/1 Running 0 1m 10.10.0.190 k8s1 <none>
default testds-bj4dd 2/2 Running 0 1m 10.10.0.78 k8s1 <none>
default testds-hc9lm 2/2 Running 0 1m 10.10.1.253 k8s2 <none>
external-ips-test app1-798d4f944d-hz75k 2/2 Running 0 24m 10.10.0.55 k8s1 <none>
external-ips-test app2-dc85b4585-8r7qz 2/2 Running 0 24m 10.10.0.6 k8s1 <none>
external-ips-test app3-68fb594d47-ndf6d 2/2 Running 0 24m 10.10.1.58 k8s2 <none>
external-ips-test host-client-4psrf 1/1 Running 0 24m 192.168.36.12 k8s2 <none>
external-ips-test host-client-5qzx6 1/1 Running 0 24m 192.168.36.11 k8s1 <none>
external-ips-test host-server-1-56c9467d4b-62jn4 2/2 Running 0 24m 192.168.36.11 k8s1 <none>
external-ips-test host-server-2-b8d89c58c-6wdld 2/2 Running 0 24m 192.168.36.11 k8s1 <none>
kube-system cilium-2tffq 1/1 Running 0 1m 192.168.36.11 k8s1 <none>
kube-system cilium-kk22b 1/1 Running 0 1m 192.168.36.12 k8s2 <none>
kube-system cilium-operator-56f99554b7-22qms 1/1 Running 0 44m 192.168.36.11 k8s1 <none>
kube-system coredns-687db6485c-h6cgp 1/1 Running 0 48m 10.10.0.248 k8s1 <none>
kube-system etcd-k8s1 1/1 Running 0 47m 192.168.36.11 k8s1 <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 47m 192.168.36.11 k8s1 <none>
kube-system kube-controller-manager-k8s1 1/1 Running 0 48m 192.168.36.11 k8s1 <none>
kube-system kube-scheduler-k8s1 1/1 Running 0 47m 192.168.36.11 k8s1 <none>
kube-system log-gatherer-5zspv 1/1 Running 0 44m 192.168.36.13 k8s3 <none>
kube-system log-gatherer-hjkvn 1/1 Running 0 44m 192.168.36.11 k8s1 <none>
kube-system log-gatherer-mnwfb 1/1 Running 0 44m 192.168.36.12 k8s2 <none>
kube-system registry-adder-mhtlw 1/1 Running 0 46m 192.168.36.13 k8s3 <none>
kube-system registry-adder-trl5m 1/1 Running 0 46m 192.168.36.11 k8s1 <none>
kube-system registry-adder-xsvl8 1/1 Running 0 46m 192.168.36.12 k8s2 <none>
Stderr:
Fetching command output from pods [cilium-2tffq cilium-kk22b]
cmd: kubectl exec -n kube-system cilium-2tffq -- cilium bpf tunnel list
Exitcode: 0
Stdout:
TUNNEL VALUE
f00d::a0c:0:0:0:0 192.168.36.12:0
10.10.1.0:0 192.168.36.12:0
Stderr:
cmd: kubectl exec -n kube-system cilium-2tffq -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
176 Disabled Disabled 33250 k8s:id=app2 f00d::a0b:0:0:4390 10.10.0.6 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=external-ips-test
345 Disabled Disabled 63357 k8s:io.cilium.k8s.policy.cluster=default f00d::a0b:0:0:865c 10.10.0.78 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
1952 Disabled Disabled 42791 k8s:id=app1 f00d::a0b:0:0:b2c8 10.10.0.55 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=external-ips-test
2827 Disabled Disabled 4 reserved:health f00d::a0b:0:0:88aa 10.10.0.188 ready
3188 Disabled Disabled 41591 k8s:io.cilium.k8s.policy.cluster=default f00d::a0b:0:0:3800 10.10.0.190 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
3269 Disabled Disabled 57813 k8s:io.cilium.k8s.policy.cluster=default f00d::a0b:0:0:5078 10.10.0.248 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
Stderr:
cmd: kubectl exec -n kube-system cilium-kk22b -- cilium bpf tunnel list
Exitcode: 0
Stdout:
TUNNEL VALUE
10.10.0.0:0 192.168.36.11:0
f00d::a0b:0:0:0:0 192.168.36.11:0
Stderr:
cmd: kubectl exec -n kube-system cilium-kk22b -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
528 Disabled Disabled 41591 k8s:io.cilium.k8s.policy.cluster=default f00d::a0c:0:0:451a 10.10.1.111 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
943 Disabled Disabled 4 reserved:health f00d::a0c:0:0:1948 10.10.1.80 ready
1022 Disabled Disabled 28872 k8s:io.cilium.k8s.policy.cluster=default f00d::a0c:0:0:bd6c 10.10.1.236 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=test-k8s2
1388 Disabled Disabled 33637 k8s:id=app3 f00d::a0c:0:0:959d 10.10.1.58 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=external-ips-test
1824 Disabled Disabled 63357 k8s:io.cilium.k8s.policy.cluster=default f00d::a0c:0:0:44e6 10.10.1.253 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
Stderr:
===================== Exiting AfterFailed =====================