I've seen this in multiple PRs, so it's definitely a flake. Note that this appears to be different from #13430.
Stacktrace
/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:514
cilium pre-flight checks failed
Expected
<*errors.errorString | 0xc00030c240>: {
s: "Cilium validation failed: 4m0s timeout expired: Last polled error: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-28czp': Exitcode: 255 \nErr: exit status 255\nStdout:\n \t \nStderr:\n \t Error: Cannot get status/probe: Put \"http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe\": context deadline exceeded\n\t \n\t command terminated with exit code 255\n\t \n",
}
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/helpers/manifest.go:316
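For context, the failing call is a PUT against the agent's local cilium-health API over its unix socket (the URL-encoded path in the error decodes to /var/run/cilium/health.sock), so the deadline seems to expire on the request to the local health daemon rather than on a specific cross-node probe. A rough way to poke the same endpoint by hand on the affected pod, assuming the agent runs in kube-system and curl is available in the agent image (both assumptions, not verified here), would be something like:

    kubectl -n kube-system exec cilium-28czp -- \
      curl --max-time 30 --unix-socket /var/run/cilium/health.sock \
      -X PUT http://localhost/v1beta/status/probe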
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
⚠️ Number of "level=warning" in logs: 7
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Mutation detector is enabled, this will result in memory leakage.
⚠️ Number of "context deadline exceeded" in logs: 24
⚠️ Number of "level=error" in logs: 17
⚠️ Number of "level=warning" in logs: 37
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 5 errors/warnings:
Mutation detector is enabled, this will result in memory leakage.
insertNeighbor failed
Unable to enqueue endpoint policy visibility event
Unable to enqueue endpoint policy bandwidth event
SessionAffinity feature requires BPF LRU maps. Disabling the feature.
Cilium pods: [cilium-28czp cilium-qwq4q]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod Ingress Egress
grafana-d69c97b9b-fw4ck
prometheus-655fb888d7-ks8f8
coredns-867bf6789f-pw5f8
Cilium agent 'cilium-28czp': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 27 Failed 0
Cilium agent 'cilium-qwq4q': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 22 Failed 0
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9/752/testReport/junit/Suite-k8s-1/19/K8sDatapathConfig_Transparent_encryption_DirectRouting_Check_connectivity_with_transparent_encryption_and_direct_routing/
6adc05ad_K8sDatapathConfig_Transparent_encryption_DirectRouting_Check_connectivity_with_transparent_encryption_and_direct_routing.zip
https://datastudio.google.com/s/tKc_PRLPvDs