Test Name
K8sAgentPolicyTest Multi-node policy test with L7 policy using connectivity-check to check datapath
Failure Output
connectivity-check pods are not ready after timeout
Stack Trace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Expected
<*errors.errorString | 0xc001798de0>: {
s: "timed out waiting for pods with filter to be ready: 4m0s timeout expired",
}
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/k8s/net_policies.go:894
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
cilium.bpf_metadata: map type mismatch on /sys/fs/bpf/tc/globals/cilium_ipcache: got 1, wanted 11
Policy map sync fixed errors, consider running with debug verbose = policy to get detailed dumps
Key allocation attempt failed
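For context on the first warning: the numbers come from the kernel's `bpf_map_type` enum in `include/uapi/linux/bpf.h`, where 1 is `BPF_MAP_TYPE_HASH` and 11 is `BPF_MAP_TYPE_LPM_TRIE`. In other words, the pinned `cilium_ipcache` map found on disk was a hash map while the agent expected an LPM trie. A small illustrative lookup (enum values taken from upstream kernel headers; the `describe_mismatch` helper itself is hypothetical, not part of Cilium):

```python
# Map type IDs from the upstream Linux bpf_map_type enum
# (include/uapi/linux/bpf.h). Only the two values from the
# warning are listed; these are kernel-header constants, not
# values parsed from the agent output.
BPF_MAP_TYPES = {
    1: "BPF_MAP_TYPE_HASH",
    11: "BPF_MAP_TYPE_LPM_TRIE",
}

def describe_mismatch(got: int, wanted: int) -> str:
    """Render a 'got X, wanted Y' map type mismatch with symbolic names."""
    g = BPF_MAP_TYPES.get(got, f"unknown({got})")
    w = BPF_MAP_TYPES.get(wanted, f"unknown({wanted})")
    return f"pinned map is {g}, agent expected {w}"
```

A stale pinned map like this typically means a leftover from a previous run with a different map definition, which is consistent with a flaky environment rather than a code bug.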
Cilium pods: [cilium-44rsb cilium-55qxs]
Netpols loaded:
CiliumNetworkPolicies loaded: default::echo-c default::pod-to-a-intra-node-proxy-egress-policy default::pod-to-a-multi-node-proxy-egress-policy default::pod-to-c-intra-node-proxy-to-proxy-policy default::pod-to-c-multi-node-proxy-to-proxy-policy
Endpoint Policy Enforcement:
Pod Ingress Egress
pod-to-a-intra-node-proxy-egress-policy-96b98dbb5-z44z7 false false
pod-to-c-intra-node-proxy-ingress-policy-99978465c-wvm4m false false
testclient-n292v false false
pod-to-c-multi-node-proxy-to-proxy-policy-9d9cbd695-97482 false false
echo-c-799d46bc9f-29mfq false false
testclient-crcnt false false
testds-k48h6 false false
coredns-8cfc78c54-hj947 false false
echo-a-77c976f59b-xbttt false false
echo-b-5f88747484-8xp9s false false
pod-to-a-multi-node-proxy-egress-policy-599fd76758-8hslb false false
pod-to-c-intra-node-proxy-to-proxy-policy-8656bf7f9b-828bc false false
pod-to-c-multi-node-proxy-ingress-policy-7dbccb8d4b-f6v2q false false
test-k8s2-5b756fd6c5-qvkdz false false
testds-qwl45 false false
Cilium agent 'cilium-44rsb': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 69 Failed 0
Cilium agent 'cilium-55qxs': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 47 Failed 0
Standard Error
12:16:55 STEP: Running BeforeAll block for EntireTestsuite K8sAgentPolicyTest Multi-node policy test with L7 policy
12:16:55 STEP: WaitforPods(namespace="default", filter="")
12:20:55 STEP: WaitforPods(namespace="default", filter="") => timed out waiting for pods with filter to be ready: 4m0s timeout expired
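The WaitforPods failure above is a plain poll-until-deadline readiness wait: the BeforeAll block polls pod readiness every interval until the 4m0s budget is exhausted, then surfaces the timeout error seen in the trace. A minimal, hypothetical sketch of that pattern (the real helper lives in Cilium's Go test framework; all names and the injected clock here are illustrative):

```python
import time

def wait_for_pods(is_ready, timeout=240.0, interval=1.0,
                  now=time.monotonic, sleep=time.sleep):
    """Poll is_ready() until it returns True or the timeout expires.

    `is_ready` stands in for the namespace/filter pod check; `now` and
    `sleep` are injectable so the loop can be tested with a fake clock.
    """
    deadline = now() + timeout
    while now() < deadline:
        if is_ready():
            return
        sleep(interval)
    raise TimeoutError(
        f"timed out waiting for pods with filter to be ready: "
        f"{int(timeout // 60)}m0s timeout expired"
    )
```

Note that a loop like this reports only the timeout, not *which* pods never became ready; `kubectl describe pods -n default` from the run's artifacts is usually needed to see the blocking pod.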
Resources
Anything else?
Seen on #22721