/home/jenkins/workspace/cilium-v1.7-standard/k8s-1.17-gopath/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:430
cilium pre-flight checks failed
Expected
<*errors.errorString | 0xc001670470>: {
s: "CiliumPreFlightCheck error: Timeout reached: PreflightCheck failed: Last polled error: status is unhealthy: cilium-agent 'cilium-rhn6p' is unhealthy: Exitcode: 1 \nStdout:\n \t KVStore: Failure Err: not able to connect to any etcd endpoints\n\t Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]\n\t Kubernetes APIs: [\"CustomResourceDefinition\", \"cilium/v2::CiliumClusterwideNetworkPolicy\", \"cilium/v2::CiliumEndpoint\", \"cilium/v2::CiliumNetworkPolicy\", \"cilium/v2::CiliumNode\", \"core/v1::Namespace\", \"core/v1::Pods\", \"core/v1::Service\", \"discovery/v1beta1::EndpointSlice\", \"networking.k8s.io/v1::NetworkPolicy\"]\n\t KubeProxyReplacement: Probe []\n\t Cilium: Failure Kvstore service is not ready\n\t NodeMonitor: Disabled\n\t Cilium health daemon: Ok \n\t IPAM: IPv4: 4/255 allocated from 10.10.0.0/24, IPv6: 4/4294967295 allocated from f00d::a0a:0:0:0/96\n\t Controller Status: 26/26 healthy\n\t Proxy Status: OK, ip 10.10.0.80, 0 redirects active on ports 10000-20000\n\t \nStderr:\n \t command terminated with exit code 1\n\t \n",
}
to be nil
/home/jenkins/workspace/cilium-v1.7-standard/k8s-1.17-gopath/src/github.com/cilium/cilium/test/k8sT/assertionHelpers.go:134
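For context, the assertion at assertionHelpers.go:134 appears to fire when the preflight poll gives up, which is what produces the "Timeout reached: PreflightCheck failed: Last polled error" message above. Below is a minimal, illustrative sketch of that kind of poll loop; it is not the actual helper from the cilium test framework. The pod names and namespace are taken from this log, and the 5-minute budget is an assumed value.

```go
// Illustrative only: poll `cilium status` inside each cilium-agent pod until
// all agents report healthy, or give up after an assumed deadline.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// agentHealthy runs `cilium status` in one agent pod; the command exits
// non-zero when a subsystem (e.g. the KVStore) is failing.
func agentHealthy(pod string) error {
	out, err := exec.Command("kubectl", "-n", "kube-system", "exec", pod,
		"--", "cilium", "status").CombinedOutput()
	if err != nil {
		return fmt.Errorf("cilium-agent %q is unhealthy: %v\n%s", pod, err, out)
	}
	return nil
}

func main() {
	pods := []string{"cilium-rhn6p", "cilium-w45br"} // agent pods from this run
	deadline := time.Now().Add(5 * time.Minute)      // assumed budget
	for {
		var lastErr error
		for _, p := range pods {
			if err := agentHealthy(p); err != nil {
				lastErr = err
				break
			}
		}
		if lastErr == nil {
			fmt.Println("all cilium agents are healthy")
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("Timeout reached: PreflightCheck failed: Last polled error: %v\n", lastErr)
			return
		}
		time.Sleep(10 * time.Second)
	}
}
```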
STEP: Installing Cilium
STEP: Installing DNS Deployment
STEP: Performing Cilium preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium controllers preflight check
Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-w45br': controller kvstore-etcd-lock-session-renew is failing: Exitcode: 0
Stdout:
KVStore: Ok etcd: 1/1 connected, lease-ID=4c5473b484eda004, lock lease-ID=4c5473b484eda006, has-quorum=unable to acquire lock: caller context ended: context deadline exceeded, consecutive-errors=2: https://cilium-etcd-client.kube-system.svc:2379 - 3.3.12
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Ok OK
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 8/255 allocated from 10.10.1.0/24, IPv6: 8/4294967295 allocated from f00d::a0a:100:0:0/96
Controller Status: 56/57 healthy
Name Last success Last error Count Message
cilium-health-ep 42s ago never 0 no error
dns-garbage-collector-job 48s ago never 0 no error
endpoint-1655-regeneration-recovery never never 0 no error
endpoint-1871-regeneration-recovery never never 0 no error
endpoint-3970-regeneration-recovery never never 0 no error
endpoint-3991-regeneration-recovery never never 0 no error
endpoint-436-regeneration-recovery never never 0 no error
endpoint-502-regeneration-recovery never never 0 no error
endpoint-710-regeneration-recovery never never 0 no error
ipcache-bpf-garbage-collection 1m28s ago never 0 no error
k8s-heartbeat 18s ago never 0 no error
kvstore-etcd-lock-session-renew never 16s ago 4 unable to renew etcd lock session: context canceled
kvstore-etcd-session-renew never never 0 no error
kvstore-locks-gc 14s ago never 0 no error
kvstore-sync-store-cilium/state/nodes/v1 1m28s ago never 0 no error
mark-k8s-node-as-available 1m43s ago never 0 no error
metricsmap-bpf-prom-sync 8s ago never 0 no error
propagating local node change to kv-store 1m28s ago never 0 no error
resolve-identity-1655 1m34s ago never 0 no error
resolve-identity-1871 1m38s ago never 0 no error
resolve-identity-3970 1m42s ago never 0 no error
resolve-identity-3991 1m39s ago never 0 no error
resolve-identity-436 1m36s ago never 0 no error
resolve-identity-710 1m43s ago never 0 no error
restoring-ep-identity (502) 1m44s ago never 0 no error
sync-IPv4-identity-mapping (1655) 1m28s ago never 0 no error
sync-IPv4-identity-mapping (1871) 1m28s ago never 0 no error
sync-IPv4-identity-mapping (3970) 1m28s ago never 0 no error
sync-IPv4-identity-mapping (3991) 1m28s ago never 0 no error
sync-IPv4-identity-mapping (436) 1m28s ago never 0 no error
sync-IPv4-identity-mapping (502) 1m28s ago never 0 no error
sync-IPv4-identity-mapping (710) 1m28s ago never 0 no error
sync-IPv6-identity-mapping (1655) 1m28s ago never 0 no error
sync-IPv6-identity-mapping (1871) 1m28s ago never 0 no error
sync-IPv6-identity-mapping (3970) 1m28s ago never 0 no error
sync-IPv6-identity-mapping (3991) 1m28s ago never 0 no error
sync-IPv6-identity-mapping (436) 1m28s ago never 0 no error
sync-IPv6-identity-mapping (502) 1m28s ago never 0 no error
sync-IPv6-identity-mapping (710) 1m28s ago never 0 no error
sync-endpoints-and-host-ips 44s ago never 0 no error
sync-lb-maps-with-k8s-services 1m44s ago never 0 no error
sync-policymap-1655 33s ago never 0 no error
sync-policymap-1871 36s ago never 0 no error
sync-policymap-3970 40s ago never 0 no error
sync-policymap-3991 37s ago never 0 no error
sync-policymap-436 34s ago never 0 no error
sync-policymap-502 37s ago never 0 no error
sync-policymap-710 40s ago never 0 no error
sync-to-k8s-ciliumendpoint (1655) 4s ago never 0 no error
sync-to-k8s-ciliumendpoint (1871) 8s ago never 0 no error
sync-to-k8s-ciliumendpoint (3970) 12s ago never 0 no error
sync-to-k8s-ciliumendpoint (3991) 9s ago never 0 no error
sync-to-k8s-ciliumendpoint (436) 6s ago never 0 no error
sync-to-k8s-ciliumendpoint (502) 4s ago never 0 no error
sync-to-k8s-ciliumendpoint (710) 13s ago never 0 no error
template-dir-watcher never never 0 no error
update-k8s-node-annotations 1m46s ago never 0 no error
Proxy Status: OK, ip 10.10.1.1, 0 redirects active on ports 10000-20000
Cluster health: 1/2 reachable (2020-08-03T13:32:35Z)
Name   IP              Reachable   Endpoints reachable
k8s1   192.168.36.11   false       false
Stderr:
STEP: Performing Cilium status preflight check
STEP: Performing Cilium controllers preflight check
Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-w45br': controller kvstore-etcd-lock-session-renew is failing: Exitcode: 0
Stdout:
KVStore: Ok etcd: 1/1 connected, lease-ID=4c5473b484eda004, lock lease-ID=4c5473b484eda006, has-quorum=unable to acquire lock: caller context ended: context deadline exceeded, consecutive-errors=2: https://cilium-etcd-client.kube-system.svc:2379 - 3.3.12
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Ok OK
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 8/255 allocated from 10.10.1.0/24, IPv6: 8/4294967295 allocated from f00d::a0a:100:0:0/96
Controller Status: 56/57 healthy
Name Last success Last error Count Message
cilium-health-ep 47s ago never 0 no error
dns-garbage-collector-job 53s ago never 0 no error
endpoint-1655-regeneration-recovery never never 0 no error
endpoint-1871-regeneration-recovery never never 0 no error
endpoint-3970-regeneration-recovery never never 0 no error
endpoint-3991-regeneration-recovery never never 0 no error
endpoint-436-regeneration-recovery never never 0 no error
endpoint-502-regeneration-recovery never never 0 no error
endpoint-710-regeneration-recovery never never 0 no error
ipcache-bpf-garbage-collection 1m33s ago never 0 no error
k8s-heartbeat 23s ago never 0 no error
kvstore-etcd-lock-session-renew never 7s ago 5 unable to renew etcd lock session: context canceled
kvstore-etcd-session-renew never never 0 no error
kvstore-locks-gc 19s ago never 0 no error
kvstore-sync-store-cilium/state/nodes/v1 1m33s ago never 0 no error
mark-k8s-node-as-available 1m48s ago never 0 no error
metricsmap-bpf-prom-sync 8s ago never 0 no error
propagating local node change to kv-store 1m33s ago never 0 no error
resolve-identity-1655 1m39s ago never 0 no error
resolve-identity-1871 1m43s ago never 0 no error
resolve-identity-3970 1m47s ago never 0 no error
resolve-identity-3991 1m44s ago never 0 no error
resolve-identity-436 1m41s ago never 0 no error
resolve-identity-710 1m48s ago never 0 no error
restoring-ep-identity (502) 1m49s ago never 0 no error
sync-IPv4-identity-mapping (1655) 1m33s ago never 0 no error
sync-IPv4-identity-mapping (1871) 1m33s ago never 0 no error
sync-IPv4-identity-mapping (3970) 1m33s ago never 0 no error
sync-IPv4-identity-mapping (3991) 1m33s ago never 0 no error
sync-IPv4-identity-mapping (436) 1m33s ago never 0 no error
sync-IPv4-identity-mapping (502) 1m33s ago never 0 no error
sync-IPv4-identity-mapping (710) 1m33s ago never 0 no error
sync-IPv6-identity-mapping (1655) 1m33s ago never 0 no error
sync-IPv6-identity-mapping (1871) 1m33s ago never 0 no error
sync-IPv6-identity-mapping (3970) 1m33s ago never 0 no error
sync-IPv6-identity-mapping (3991) 1m33s ago never 0 no error
sync-IPv6-identity-mapping (436) 1m33s ago never 0 no error
sync-IPv6-identity-mapping (502) 1m33s ago never 0 no error
sync-IPv6-identity-mapping (710) 1m33s ago never 0 no error
sync-endpoints-and-host-ips 49s ago never 0 no error
sync-lb-maps-with-k8s-services 1m49s ago never 0 no error
sync-policymap-1655 38s ago never 0 no error
sync-policymap-1871 41s ago never 0 no error
sync-policymap-3970 45s ago never 0 no error
sync-policymap-3991 42s ago never 0 no error
sync-policymap-436 39s ago never 0 no error
sync-policymap-502 42s ago never 0 no error
sync-policymap-710 45s ago never 0 no error
sync-to-k8s-ciliumendpoint (1655) 9s ago never 0 no error
sync-to-k8s-ciliumendpoint (1871) 13s ago never 0 no error
sync-to-k8s-ciliumendpoint (3970) 7s ago never 0 no error
sync-to-k8s-ciliumendpoint (3991) 4s ago never 0 no error
sync-to-k8s-ciliumendpoint (436) 11s ago never 0 no error
sync-to-k8s-ciliumendpoint (502) 9s ago never 0 no error
sync-to-k8s-ciliumendpoint (710) 8s ago never 0 no error
template-dir-watcher never never 0 no error
update-k8s-node-annotations 1m51s ago never 0 no error
Proxy Status: OK, ip 10.10.1.1, 0 redirects active on ports 10000-20000
Cluster health: 1/2 reachable (2020-08-03T13:32:35Z)
Name   IP              Reachable   Endpoints reachable
k8s1   192.168.36.11   false       false
Stderr:
STEP: Performing Cilium status preflight check
Cilium is not ready yet: status is unhealthy: cilium-agent 'cilium-rhn6p' is unhealthy: Exitcode: 1
Stdout:
KVStore: Failure Err: not able to connect to any etcd endpoints
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Failure Kvstore service is not ready
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 4/255 allocated from 10.10.0.0/24, IPv6: 4/4294967295 allocated from f00d::a0a:0:0:0/96
Controller Status: 30/30 healthy
Proxy Status: OK, ip 10.10.0.80, 0 redirects active on ports 10000-20000
Stderr:
command terminated with exit code 1
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
Cilium is not ready yet: status is unhealthy: cilium-agent 'cilium-rhn6p' is unhealthy: Exitcode: 1
Stdout:
KVStore: Failure Err: not able to connect to any etcd endpoints
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Failure Kvstore service is not ready
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 4/255 allocated from 10.10.0.0/24, IPv6: 4/4294967295 allocated from f00d::a0a:0:0:0/96
Controller Status: 30/30 healthy
Proxy Status: OK, ip 10.10.0.80, 0 redirects active on ports 10000-20000
Stderr:
command terminated with exit code 1
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
Cilium is not ready yet: status is unhealthy: cilium-agent 'cilium-rhn6p' is unhealthy: Exitcode: 1
Stdout:
KVStore: Failure Err: quorum check failed 3 times in a row: timeout while waiting for initial connection
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Failure Kvstore service is not ready
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 4/255 allocated from 10.10.0.0/24, IPv6: 4/4294967295 allocated from f00d::a0a:0:0:0/96
Controller Status: 30/30 healthy
Proxy Status: OK, ip 10.10.0.80, 0 redirects active on ports 10000-20000
Stderr:
command terminated with exit code 1
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
Cilium is not ready yet: status is unhealthy: cilium-agent 'cilium-rhn6p' is unhealthy: Exitcode: 1
Stdout:
KVStore: Failure Err: quorum check failed 4 times in a row: timeout while waiting for initial connection
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Failure Kvstore service is not ready
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 4/255 allocated from 10.10.0.0/24, IPv6: 4/4294967295 allocated from f00d::a0a:0:0:0/96
Controller Status: 30/30 healthy
Proxy Status: OK, ip 10.10.0.80, 0 redirects active on ports 10000-20000
Stderr:
command terminated with exit code 1
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
Cilium is not ready yet: status is unhealthy: cilium-agent 'cilium-rhn6p' is unhealthy: Exitcode: 1
Stdout:
KVStore: Failure Err: quorum check failed 4 times in a row: timeout while waiting for initial connection
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Failure Kvstore service is not ready
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 4/255 allocated from 10.10.0.0/24, IPv6: 4/4294967295 allocated from f00d::a0a:0:0:0/96
Controller Status: 30/30 healthy
Proxy Status: OK, ip 10.10.0.80, 0 redirects active on ports 10000-20000
Stderr:
command terminated with exit code 1
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
Cilium is not ready yet: status is unhealthy: cilium-agent 'cilium-rhn6p' is unhealthy: Exitcode: 1
Stdout:
KVStore: Failure Err: quorum check failed 5 times in a row: timeout while waiting for initial connection
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Failure Kvstore service is not ready
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 4/255 allocated from 10.10.0.0/24, IPv6: 4/4294967295 allocated from f00d::a0a:0:0:0/96
Controller Status: 30/30 healthy
Proxy Status: OK, ip 10.10.0.80, 0 redirects active on ports 10000-20000
Stderr:
command terminated with exit code 1
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
Cilium is not ready yet: status is unhealthy: cilium-agent 'cilium-rhn6p' is unhealthy: Exitcode: 1
Stdout:
KVStore: Failure Err: quorum check failed 5 times in a row: timeout while waiting for initial connection
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Failure Kvstore service is not ready
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 4/255 allocated from 10.10.0.0/24, IPv6: 4/4294967295 allocated from f00d::a0a:0:0:0/96
Controller Status: 30/30 healthy
Proxy Status: OK, ip 10.10.0.80, 0 redirects active on ports 10000-20000
Stderr:
command terminated with exit code 1
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
Cilium is not ready yet: status is unhealthy: cilium-agent 'cilium-rhn6p' is unhealthy: Exitcode: 1
Stdout:
KVStore: Failure Err: quorum check failed 6 times in a row: timeout while waiting for initial connection
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Failure Kvstore service is not ready
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 4/255 allocated from 10.10.0.0/24, IPv6: 4/4294967295 allocated from f00d::a0a:0:0:0/96
Controller Status: 30/30 healthy
Proxy Status: OK, ip 10.10.0.80, 0 redirects active on ports 10000-20000
Stderr:
command terminated with exit code 1
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
Cilium is not ready yet: status is unhealthy: cilium-agent 'cilium-rhn6p' is unhealthy: Exitcode: 1
Stdout:
KVStore: Failure Err: quorum check failed 7 times in a row: timeout while waiting for initial connection
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Failure Kvstore service is not ready
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 4/255 allocated from 10.10.0.0/24, IPv6: 4/4294967295 allocated from f00d::a0a:0:0:0/96
Controller Status: 30/30 healthy
Proxy Status: OK, ip 10.10.0.80, 0 redirects active on ports 10000-20000
Stderr:
command terminated with exit code 1
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
Cilium is not ready yet: status is unhealthy: cilium-agent 'cilium-rhn6p' is unhealthy: Exitcode: 1
Stdout:
Stderr:
Get http:///var/run/cilium/cilium.sock/v1/healthz: dial unix /var/run/cilium/cilium.sock: connect: connection refused
Is the agent running?
command terminated with exit code 1
STEP: Performing Cilium status preflight check
Cilium is not ready yet: status is unhealthy: cilium-agent 'cilium-w45br' is unhealthy: Exitcode: 1
Stdout:
KVStore: Failure Err: quorum check failed 8 times in a row: 4m40.940082376s since last heartbeat update has been received
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Failure Kvstore service is not ready
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 6/255 allocated from 10.10.1.0/24, IPv6: 6/4294967295 allocated from f00d::a0a:100:0:0/96
Controller Status: 44/45 healthy
Name                              Last success   Last error   Count   Message
kvstore-etcd-lock-session-renew   never          12s ago      15      unable to renew etcd lock session: context canceled
Proxy Status: OK, ip 10.10.1.1, 0 redirects active on ports 10000-20000
Stderr:
command terminated with exit code 1
STEP: Performing Cilium status preflight check
Cilium is not ready yet: status is unhealthy: cilium-agent 'cilium-w45br' is unhealthy: Exitcode: 1
Stdout:
KVStore: Failure Err: quorum check failed 8 times in a row: 4m40.940082376s since last heartbeat update has been received
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Failure Kvstore service is not ready
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 6/255 allocated from 10.10.1.0/24, IPv6: 6/4294967295 allocated from f00d::a0a:100:0:0/96
Controller Status: 44/45 healthy
Name                              Last success   Last error   Count   Message
kvstore-etcd-lock-session-renew   never          17s ago      15      unable to renew etcd lock session: context canceled
Proxy Status: OK, ip 10.10.1.1, 0 redirects active on ports 10000-20000
Stderr:
command terminated with exit code 1
STEP: Performing Cilium status preflight check
Cilium is not ready yet: status is unhealthy: cilium-agent 'cilium-w45br' is unhealthy: Exitcode: 1
Stdout:
KVStore: Failure Err: quorum check failed 8 times in a row: 4m40.940082376s since last heartbeat update has been received
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Failure Kvstore service is not ready
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 7/255 allocated from 10.10.1.0/24, IPv6: 7/4294967295 allocated from f00d::a0a:100:0:0/96
Controller Status: 50/51 healthy
Name                              Last success   Last error   Count   Message
kvstore-etcd-lock-session-renew   never          22s ago      15      unable to renew etcd lock session: context canceled
Proxy Status: OK, ip 10.10.1.1, 0 redirects active on ports 10000-20000
Stderr:
command terminated with exit code 1
STEP: Performing Cilium status preflight check
Cilium is not ready yet: status is unhealthy: cilium-agent 'cilium-w45br' is unhealthy: Exitcode: 1
Stdout:
KVStore: Failure Err: quorum check failed 8 times in a row: 4m40.940082376s since last heartbeat update has been received
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Failure Kvstore service is not ready
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 7/255 allocated from 10.10.1.0/24, IPv6: 7/4294967295 allocated from f00d::a0a:100:0:0/96
Controller Status: 50/51 healthy
Name                              Last success   Last error   Count   Message
kvstore-etcd-lock-session-renew   never          27s ago      15      unable to renew etcd lock session: context canceled
Proxy Status: OK, ip 10.10.1.1, 0 redirects active on ports 10000-20000
Stderr:
command terminated with exit code 1
STEP: Performing Cilium status preflight check
Cilium is not ready yet: status is unhealthy: cilium-agent 'cilium-rhn6p' is unhealthy: Exitcode: 1
Stdout:
KVStore: Failure Err: not able to connect to any etcd endpoints
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Failure Kvstore service is not ready
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 4/255 allocated from 10.10.0.0/24, IPv6: 4/4294967295 allocated from f00d::a0a:0:0:0/96
Controller Status: 26/26 healthy
Proxy Status: OK, ip 10.10.0.80, 0 redirects active on ports 10000-20000
Stderr:
command terminated with exit code 1
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium status preflight check
==== Test Finished at 2020-08-03T13:36:46Z ====
Cilium is not ready yet: status is unhealthy: cilium-agent 'cilium-rhn6p' is unhealthy: Exitcode: 1
Stdout:
KVStore: Failure Err: not able to connect to any etcd endpoints
Kubernetes: Ok 1.17 (v1.17.9) [linux/amd64]
Kubernetes APIs: ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Probe []
Cilium: Failure Kvstore service is not ready
NodeMonitor: Disabled
Cilium health daemon: Ok
IPAM: IPv4: 5/255 allocated from 10.10.0.0/24, IPv6: 5/4294967295 allocated from f00d::a0a:0:0:0/96
Controller Status: 32/32 healthy
Proxy Status: OK, ip 10.10.0.80, 0 redirects active on ports 10000-20000
Stderr:
command terminated with exit code 1
===================== TEST FAILED =====================
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default test-k8s2-596d68cd59-wrvcl 2/2 Running 0 5m53s 10.10.1.247 k8s2 <none> <none>
default testclient-78btk 1/1 Running 0 5m53s 10.10.1.7 k8s2 <none> <none>
default testclient-rz95h 1/1 Running 0 5m53s 10.10.0.245 k8s1 <none> <none>
default testds-tcf8b 2/2 Running 0 5m53s 10.10.1.197 k8s2 <none> <none>
default testds-tvngr 2/2 Running 0 5m53s 10.10.0.182 k8s1 <none> <none>
kube-system cilium-etcd-5sfm6hmskq 0/1 Error 0 7s 10.10.0.213 k8s1 <none> <none>
kube-system cilium-etcd-operator-7864c6c4cf-g6fjx 1/1 Running 0 5m52s 192.168.36.12 k8s2 <none> <none>
kube-system cilium-etcd-p87vgm7zqh 1/1 Running 0 15s 10.10.1.75 k8s2 <none> <none>
kube-system cilium-operator-5cd474578f-mhssx 1/1 Running 2 5m52s 192.168.36.12 k8s2 <none> <none>
kube-system cilium-rhn6p 1/1 Running 1 5m52s 192.168.36.11 k8s1 <none> <none>
kube-system cilium-w45br 0/1 Running 0 5m52s 192.168.36.12 k8s2 <none> <none>
kube-system coredns-767d4c6dd7-mspl7 1/1 Running 0 16m 10.10.1.203 k8s2 <none> <none>
kube-system etcd-k8s1 1/1 Running 0 60m 192.168.36.11 k8s1 <none> <none>
kube-system etcd-operator-59cf4cfb7c-g4t2f 1/1 Running 0 35s 10.10.1.21 k8s2 <none> <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 60m 192.168.36.11 k8s1 <none> <none>
kube-system kube-controller-manager-k8s1 1/1 Running 0 60m 192.168.36.11 k8s1 <none> <none>
kube-system kube-proxy-76bzf 1/1 Running 0 58m 192.168.36.12 k8s2 <none> <none>
kube-system kube-proxy-z4r5g 1/1 Running 0 60m 192.168.36.11 k8s1 <none> <none>
kube-system kube-scheduler-k8s1 1/1 Running 0 60m 192.168.36.11 k8s1 <none> <none>
kube-system log-gatherer-75w26 1/1 Running 0 57m 192.168.36.12 k8s2 <none> <none>
kube-system log-gatherer-vb6nz 1/1 Running 0 57m 192.168.36.11 k8s1 <none> <none>
kube-system registry-adder-dmkct 1/1 Running 0 58m 192.168.36.11 k8s1 <none> <none>
kube-system registry-adder-j8dql 1/1 Running 0 58m 192.168.36.12 k8s2 <none> <none>
Stderr:
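The pod listing above shows the operator-managed etcd pods churning (cilium-etcd-5sfm6hmskq in Error at 7s old, cilium-etcd-p87vgm7zqh only 15s old) while both agents report kvstore failures. Below is a hedged sketch of one way to snapshot those pods' phase and restart counts when reproducing this, selecting them by the etcd_cluster=cilium-etcd label visible in the endpoint list further down; it is illustrative only and not part of the test suite.

```go
// Illustrative only: list the cilium-managed etcd pods and print phase and
// restart counts, to correlate etcd churn with the agents' kvstore failures.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList holds only the fields we read from `kubectl get pods -o json`.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase             string `json:"phase"`
			ContainerStatuses []struct {
				RestartCount int `json:"restartCount"`
			} `json:"containerStatuses"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pods",
		"-l", "etcd_cluster=cilium-etcd", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		restarts := 0
		for _, cs := range p.Status.ContainerStatuses {
			restarts += cs.RestartCount
		}
		fmt.Printf("%s\tphase=%s\trestarts=%d\n", p.Metadata.Name, p.Status.Phase, restarts)
	}
}
```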
Fetching command output from pods [cilium-rhn6p cilium-w45br]
cmd: kubectl exec -n kube-system cilium-rhn6p -- cilium bpf tunnel list
Exitcode: 0
Stdout:
TUNNEL VALUE
f00d::a0a:100:0:0:0 192.168.36.12:0
10.10.1.0:0 192.168.36.12:0
Stderr:
cmd: kubectl exec -n kube-system cilium-rhn6p -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT   POLICY (ingress) ENFORCEMENT   POLICY (egress) ENFORCEMENT   IDENTITY   LABELS (source:key[=value])   IPv6   IPv4   STATUS
227 Disabled Disabled 19168 k8s:io.cilium.k8s.policy.cluster=default f00d::a0a:0:0:a439 10.10.0.245 restoring
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
2780 Disabled Disabled 28258 k8s:io.cilium.k8s.policy.cluster=default f00d::a0a:0:0:1dcf 10.10.0.182 restoring
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
3845 Disabled Disabled 4 reserved:health f00d::a0a:0:0:6618 10.10.0.88 ready
Stderr:
cmd: kubectl exec -n kube-system cilium-w45br -- cilium bpf tunnel list
Exitcode: 0
Stdout:
TUNNEL VALUE
10.10.0.0:0 192.168.36.11:0
f00d::a0a:0:0:0:0 192.168.36.11:0
Stderr:
cmd: kubectl exec -n kube-system cilium-w45br -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT   POLICY (ingress) ENFORCEMENT   POLICY (egress) ENFORCEMENT   IDENTITY   LABELS (source:key[=value])   IPv6   IPv4   STATUS
113 Disabled Disabled 100 k8s:io.cilium.k8s.policy.cluster=default f00d::a0a:100:0:fe5c 10.10.1.21 ready
k8s:io.cilium.k8s.policy.serviceaccount=cilium-etcd-sa
k8s:io.cilium/app=etcd-operator
k8s:io.kubernetes.pod.namespace=kube-system
436 Disabled Disabled 28258 k8s:io.cilium.k8s.policy.cluster=default f00d::a0a:100:0:83c2 10.10.1.197 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
502 Disabled Disabled 104 k8s:io.cilium.k8s.policy.cluster=default f00d::a0a:100:0:9489 10.10.1.203 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
1655 Disabled Disabled 19168 k8s:io.cilium.k8s.policy.cluster=default f00d::a0a:100:0:cd7c 10.10.1.7 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
1871 Disabled Disabled 6620 k8s:io.cilium.k8s.policy.cluster=default f00d::a0a:100:0:837a 10.10.1.247 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=test-k8s2
3653 Disabled Disabled 101 k8s:app=etcd f00d::a0a:100:0:9698 10.10.1.75 ready
k8s:etcd_cluster=cilium-etcd
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.cilium/app=etcd-operator
k8s:io.kubernetes.pod.namespace=kube-system
3970 Disabled Disabled 4 reserved:health f00d::a0a:100:0:9394 10.10.1.143 ready
Stderr:
===================== Exiting AfterFailed =====================
[[ATTACHMENT|bad92085_K8sDatapathConfig_ManagedEtcd_Check_connectivity_with_managed_etcd.zip]]
STEP: Installing Cilium
STEP: Installing DNS Deployment
STEP: Performing Cilium preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium controllers preflight check
STEP: Performing Cilium health check
STEP: Performing Cilium service preflight check
STEP: Performing K8s service preflight check
STEP: Waiting for cilium-operator to be ready
STEP: Waiting for kube-dns to be ready
STEP: Running kube-dns preflight check
STEP: Performing K8s service preflight check
On the v1.7 branch:
Link: https://jenkins.cilium.io/view/Cilium-v1.7/job/cilium-v1.7-standard/1669/testReport/junit/Suite-k8s-1/17/K8sDatapathConfig_ManagedEtcd_Check_connectivity_with_managed_etcd/
Archive:
bad92085_K8sDatapathConfig_ManagedEtcd_Check_connectivity_with_managed_etcd (1).zip