traffic policy local health check returns wrong status #11043

@pbecotte

Description

@pbecotte

Bug report

Cilium 1.7.2
K8s 1.15 on EKS

Running in AWS. Service of type LoadBalancer with an NLB allocated and externalTrafficPolicy: Local. This should cause a health check to run on each node that answers healthy or unhealthy depending on whether there is a local instance of the backend pod. Today I had one service where this worked correctly (the load balancer health checks only pass on the node with the pod) and another, identically configured, service where it didn't. Eventually I realized it is because kube-proxy and Cilium both attempt to handle this. On the nodes where kube-proxy sets up the health check, the answer is correct. On the nodes where Cilium does, the answer is wrong.
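For reference, a minimal sketch of the kind of Service described above (the name, selector, and ports are illustrative, not taken from the report):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc          # illustrative name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: example             # illustrative selector
  ports:
    - port: 80
      targetPort: 8080
# With externalTrafficPolicy: Local, Kubernetes allocates a
# healthCheckNodePort for the Service; the NLB probes that port on
# every node, and the probe should answer healthy only on nodes
# that have a local endpoint for the Service.
```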

(Whether both of those should be running at once seems like an interesting question in its own right, but in the meantime...)

The PR that added this feature says the check should report unhealthy if there are no local pods, but that is not the observed behavior. Running without kube-proxy, all the health checks always pass; running without Cilium, they are all correct.

Metadata

Labels

kind/bug: This is a bug in the Cilium logic.
kind/community-report: This was reported by a user in the Cilium community, e.g. via Slack.
