Fix native routing CIDR missing flag in daemon #12180
Conversation
Force-pushed 5d02782 to 153a639

test-me-please

Force-pushed 153a639 to 9dfa106

test-me-please

test-gke
Not related to your PR, but why didn't we call the flag --cluster-cidr?
Because it is not the cluster CIDR. The native routing space might be much larger, spanning multiple clusters or even non-k8s space.
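The distinction above can be sketched with Python's stdlib `ipaddress` module: the native routing CIDR is a reachability range that may be a strict superset of several clusters' pod CIDRs (all addresses and names here are illustrative, not taken from the PR).

```python
import ipaddress

# Illustrative addresses only: a wide native routing range that covers
# multiple clusters and non-k8s address space.
native_routing_cidr = ipaddress.ip_network("10.0.0.0/8")

cluster_pod_cidrs = [
    ipaddress.ip_network("10.1.0.0/16"),    # cluster A pod CIDR
    ipaddress.ip_network("10.2.0.0/16"),    # cluster B pod CIDR
    ipaddress.ip_network("10.200.0.0/24"),  # non-k8s workloads
]

# Every cluster CIDR fits inside the native routing range...
assert all(c.subnet_of(native_routing_cidr) for c in cluster_pod_cidrs)

# ...but the native routing range is larger than any single cluster CIDR,
# which is why --cluster-cidr would be a misleading name.
assert not native_routing_cidr.subnet_of(cluster_pod_cidrs[0])
```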
Ah, I noticed this on v1.7 and just assumed it was a bad backport. We're depending on this helm option in the v1.7 GKE GSG at the moment as well, so it might be a backport candidate: https://docs.cilium.io/en/v1.7/gettingstarted/k8s-install-gke/
That guide has
runtime failed because of missing flag in the
Coverage decreased (-0.04%) to 37.103% when pulling 9dfa1068cc6a198adf197c8a463352b49e496c9e on aanm:pr/fix-native-routing-cidr into a1cc34d on cilium:master.
…fail
Fixes: e7d4f5c ("daemon: validate IPv4NativeRoutingCIDR value in DaemonConfig")
Signed-off-by: André Martins <andre@cilium.io>

Fixes: c496e25 ("eni: Support masquerading")
Signed-off-by: André Martins <andre@cilium.io>

As we are currently running our CI with a CIDR from the Cilium-Operator, which is "10.0.0.0/16", we should set it as part of our 'nativeRoutingCIDR'.
Fixes: ace902d ("helm: Enable BPF masquerading by default")
Signed-off-by: André Martins <andre@cilium.io>
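A minimal values fragment matching that commit message might look like the following; the exact key path (`global.nativeRoutingCIDR`) is assumed from the helm option referenced in the discussion, not quoted from the PR diff.

```yaml
# Hypothetical helm values fragment: pin the native routing CIDR to the
# range the Cilium-Operator allocates pod CIDRs from in CI.
global:
  nativeRoutingCIDR: "10.0.0.0/16"
```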
@pchaigno I believe it's being fixed in #12087 (comment)
k8s 4.19 failed with the flake that is going to be fixed in #12179. I will re-push with the flag set; we only need to retest-4.19.
Force-pushed 9dfa106 to ca64502

retest-4.19

retest-runtime
Fixes: #12130