doc: Ensure ConfigMap remains compatible across 1.7 -> 1.8 upgrade #12097
Conversation
Please set the appropriate release note label.
Force-pushed from 008f502 to 805a16f.
Force-pushed from 10827bd to 4d354ea.
test-me-please
joestringer left a comment:
Shouldn't we be updating the K8sUpdates test to use this new method?
Some of the failures are likely related to disabling CNP updates:
This construct still doesn't work; I think we have to write it as:
Force-pushed from 5f78ae9 to 282d1df.
That's a fantastic idea!
Force-pushed from 282d1df to 4d6d8b3.
test-me-please
Force-pushed from d8ccd54 to bbb3ab2.
Is this a leftover? `--agent.keepDeprecatedProbes=true` will not make sense for users upgrading from 1.8 to 1.9. Actually, shouldn't these instructions be version-agnostic, since we have dedicated sections for users upgrading to a specific version?
It's a hard question: the flag will still be needed for most users, as they come from 1.7 -> 1.8 -> 1.9. The alternative is to default HTTP probes to on, but also provide an option that defaults to off for <1.8 upgradeCompatibility while allowing users to override it.
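For illustration, a minimal sketch of how the two settings discussed here could be combined during the upgrade (the release name and chart path are assumptions, and `--set agent.keepDeprecatedProbes=true` is the Helm spelling of the flag quoted above):
```
# Hypothetical upgrade invocation keeping 1.7-compatible defaults and
# the deprecated health probes enabled during the 1.7 -> 1.8 transition.
helm upgrade cilium ./install/kubernetes/cilium \
  --namespace kube-system \
  --set upgradeCompatibility=1.7 \
  --set agent.keepDeprecatedProbes=true
```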
Force-pushed from bbb3ab2 to 421d240.
test-me-please
Force-pushed from 421d240 to f692c56.
test-me-please
Will rebase after tests have passed.
When running with option `--set upgradeCompatibility=1.7`, the diff between the ConfigMaps is:

```
@@ -60,8 +60,7 @@
   #
   # Only effective when monitor aggregation is set to "medium" or higher.
   monitor-aggregation-flags: all
-
-  # ct-global-max-entries-* specifies the maximum number of connections
+  # bpf-ct-global-*-max specifies the maximum number of connections
   # supported across all endpoints, split by protocol: tcp or other. One pair
   # of maps uses these values for IPv4 connections, and another pair of maps
   # use these values for IPv6 connections.
@@ -71,10 +70,9 @@
   # policy drops or a change in loadbalancing decisions for a connection.
   #
   # For users upgrading from Cilium 1.2 or earlier, to minimize disruption
-  # during the upgrade process, comment out these options.
+  # during the upgrade process, set bpf-ct-global-tcp-max to 1000000.
   bpf-ct-global-tcp-max: "524288"
   bpf-ct-global-any-max: "262144"
-
   # bpf-policy-map-max specified the maximum number of entries in endpoint
   # policy map (per endpoint)
   bpf-policy-map-max: "16384"
@@ -140,9 +138,6 @@
   install-iptables-rules: "true"
   auto-direct-node-routes: "false"
   kube-proxy-replacement: "probe"
-  enable-host-reachable-services: "false"
-  enable-external-ips: "false"
-  enable-node-port: "false"
   node-port-bind-protection: "true"
   enable-auto-protect-node-port-range: "true"
   enable-endpoint-health-checking: "true"
```

When running without upgradeCompatibility, the diff is:

```
@@ -43,6 +43,7 @@
   # Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
   # address.
   enable-ipv6: "false"
+  enable-bpf-clock-probe: "true"

   # If you want cilium monitor to aggregate tracing for packets, set this level
   # to "low", "medium", or "maximum". The higher the level, the less packets
@@ -60,24 +61,12 @@
   #
   # Only effective when monitor aggregation is set to "medium" or higher.
   monitor-aggregation-flags: all
-
-  # ct-global-max-entries-* specifies the maximum number of connections
-  # supported across all endpoints, split by protocol: tcp or other. One pair
-  # of maps uses these values for IPv4 connections, and another pair of maps
-  # use these values for IPv6 connections.
-  #
-  # If these values are modified, then during the next Cilium startup the
-  # tracking of ongoing connections may be disrupted. This may lead to brief
-  # policy drops or a change in loadbalancing decisions for a connection.
-  #
-  # For users upgrading from Cilium 1.2 or earlier, to minimize disruption
-  # during the upgrade process, comment out these options.
-  bpf-ct-global-tcp-max: "524288"
-  bpf-ct-global-any-max: "262144"
-
   # bpf-policy-map-max specified the maximum number of entries in endpoint
   # policy map (per endpoint)
   bpf-policy-map-max: "16384"
+  # Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
+  # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
+  bpf-map-dynamic-size-ratio: "0.0025"

   # Pre-allocation of map entries allows per-packet latency to be reduced, at
   # the expense of up-front memory allocation for the entries in the maps.
@@ -136,18 +125,24 @@
   wait-bpf-mount: "false"
   masquerade: "true"
+  enable-bpf-masquerade: "true"
   enable-xt-socket-fallback: "true"
   install-iptables-rules: "true"
   auto-direct-node-routes: "false"
   kube-proxy-replacement: "probe"
-  enable-host-reachable-services: "false"
-  enable-external-ips: "false"
-  enable-node-port: "false"
   node-port-bind-protection: "true"
   enable-auto-protect-node-port-range: "true"
+  enable-session-affinity: "true"
+  k8s-require-ipv4-pod-cidr: "true"
+  k8s-require-ipv6-pod-cidr: "false"
   enable-endpoint-health-checking: "true"
   enable-well-known-identities: "false"
   enable-remote-node-identity: "true"
+  operator-api-serve-addr: "127.0.0.1:9234"
+  ipam: "cluster-pool"
+  cluster-pool-ipv4-cidr: "10.0.0.0/8"
+  cluster-pool-ipv4-mask-size: "24"
+  disable-cnp-status-updates: "true"
```

Signed-off-by: Thomas Graf <thomas@cilium.io>
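For context, one way to produce comparisons like the above (a sketch, not necessarily the exact commands used here; the chart path, release name, and file names are assumptions):
```
# Grab the ConfigMap from the running 1.7 installation.
kubectl -n kube-system get configmap cilium-config -o yaml > cm-1.7.yaml

# Render the 1.8 chart with the compatibility flag set.
helm template cilium ./install/kubernetes/cilium --namespace kube-system \
  --set upgradeCompatibility=1.7 > rendered-1.8.yaml

# Compare; in practice you would first extract the cilium-config document
# from the rendered output, since helm template emits all chart resources.
diff -u cm-1.7.yaml rendered-1.8.yaml
```
As a rough worked example for the new option above: with bpf-map-dynamic-size-ratio set to 0.0025, a node with 16 GiB of memory would budget about 0.0025 × 16384 MiB ≈ 41 MiB for dynamic sizing of the CT, NAT, and policy BPF maps.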
Force-pushed from f692c56 to 3886ce5.
test-me-please
test-me-please
Single test failed: this is a completely new test failure, so it is likely related to the PR.
It looks like it hit a newly discovered flake: #12130. It is passing locally for me.
retest-4.19