CI: controller bpf-map-sync-cilium_lb_affinity_match is failing #12118

Description

@jrajahalme

Test "Endpoint can still connect while Cilium is not running" failed on test-gke:

11:54:38  18:54:37 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-d884n': controller bpf-map-sync-cilium_lb_affinity_match is failing: Exitcode: 0 
11:54:38  Stdout:
11:54:38   	 KVStore:                Ok   Disabled
11:54:38  	 Kubernetes:             Ok   1.14+ (v1.14.10-gke.36) [linux/amd64]
11:54:38  	 Kubernetes APIs:        ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
11:54:38  	 KubeProxyReplacement:   Probe   []   [SessionAffinity]
11:54:38  	 Cilium:                 Ok      OK
11:54:38  	 NodeMonitor:            Listening for events on 4 CPUs with 64x4096 of shared memory
11:54:38  	 Cilium health daemon:   Ok   
11:54:38  	 IPAM:                   IPv4: 2/255 allocated from 10.4.47.0/24, 
11:54:38  	 Masquerading:           IPTables
11:54:38  	 Controller Status:      17/18 healthy
11:54:38  	   Name                                    Last success   Last error   Count   Message
11:54:38  	   bpf-map-sync-cilium_lb_affinity_match   never          15s ago      23      4 map sync errors   
11:54:38  	   cilium-health-ep                        26s ago        never        0       no error            
11:54:38  	   dns-garbage-collector-job               27s ago        never        0       no error            
11:54:38  	   endpoint-1949-regeneration-recovery     never          never        0       no error            
11:54:38  	   endpoint-827-regeneration-recovery      never          never        0       no error            
11:54:38  	   k8s-heartbeat                           31s ago        never        0       no error            
11:54:38  	   mark-k8s-node-as-available              4m27s ago      never        0       no error            
11:54:38  	   metricsmap-bpf-prom-sync                6s ago         never        0       no error            
11:54:38  	   resolve-identity-1949                   4m26s ago      never        0       no error            
11:54:38  	   resolve-identity-827                    4m27s ago      never        0       no error            
11:54:38  	   sync-endpoints-and-host-ips             27s ago        never        0       no error            
11:54:38  	   sync-lb-maps-with-k8s-services          4m27s ago      never        0       no error            
11:54:38  	   sync-policymap-1949                     25s ago        never        0       no error            
11:54:38  	   sync-policymap-827                      26s ago        never        0       no error            
11:54:38  	   sync-to-k8s-ciliumendpoint (1949)       6s ago         never        0       no error            
11:54:38  	   sync-to-k8s-ciliumendpoint (827)        7s ago         never        0       no error            
11:54:38  	   template-dir-watcher                    never          never        0       no error            
11:54:38  	   update-k8s-node-annotations             4m28s ago      never        0       no error            
11:54:38  	 Proxy Status:     OK, ip 10.4.47.246, 0 redirects active on ports 10000-20000
11:54:38  	 Hubble:           Ok              Current/Max Flows: 336/4096 (8.20%), Flows/s: 1.27   Metrics: Disabled
11:54:38  	 Cluster health:   2/2 reachable   (2020-06-16T18:54:29Z)
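
For triage, it may help to pull the full per-controller error message and the agent's view of the affinity map directly from the failing pod. A minimal sketch, assuming Cilium is deployed in kube-system and reusing the pod name cilium-d884n from the log above; cilium status --all-controllers and cilium map get are agent CLI commands, though exact output and availability may vary by Cilium version:

    # Full controller list with last-error messages (the status summary above
    # truncates them to "4 map sync errors")
    kubectl -n kube-system exec cilium-d884n -- cilium status --all-controllers

    # Agent-side cache of the LB affinity match map, including per-entry state
    # and any pending sync errors
    kubectl -n kube-system exec cilium-d884n -- cilium map get cilium_lb_affinity_match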

Metadata

Labels

area/daemon: Impacts operation of the Cilium daemon.
kind/bug: This is a bug in the Cilium logic.
