fix: cri: Stable order for StatusResponse.RuntimeHandlers #12036
AkihiroSuda merged 3 commits into containerd:main
Conversation
Hi @EricMountain. Thanks for your PR. I'm waiting for a containerd member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Force-pushed dbdbdfa to 2cc6656
Signed-off-by: Eric Mountain <eric.mountain@datadoghq.com>
The runtimeHandlers list in the response to `crictl info` has unstable ordering since commit 97eb1cd (the underlying switch from a list to a map) that shipped in v2.1.0. On Kubernetes nodes this causes the kubelet to update node status subresources every time the order of runtime handlers changes in the status response from containerd. The likelihood increases with the number of runtime handlers present on nodes. In some clusters this leads to every single node sending a status update every few seconds, leading to excessive Kube API server load. This change enforces stable ordering on runtime handler names. Signed-off-by: Eric Mountain <eric.mountain@datadoghq.com>
Force-pushed 2cc6656 to eb63b5b
/ok-to-test

/cherry-pick release/2.1
@AkihiroSuda: new pull request created: #12054
The RuntimeHandlers list in the response to the CRI Status() method
(`crictl info`) has unstable ordering since commit 97eb1cd (the underlying switch from a list to a map) that shipped in v2.1.0.
On Kubernetes nodes this causes the kubelet to update node status subresources
every time the order of runtime handlers changes in the status response from
containerd. The likelihood increases with the number of runtime handlers present
on nodes. In some clusters this leads to every single node sending a status update
every few seconds, leading in turn to excessive Kube API server load.
This change enforces stable ordering on runtime handler names to restore the original behaviour.
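The fix described above amounts to sorting the handler names before building the response slice, since Go deliberately randomizes map iteration order. The following is a minimal standalone sketch of that pattern; `RuntimeHandler` and `stableHandlers` here are simplified stand-ins, not the actual containerd/CRI types or functions.

```go
package main

import (
	"fmt"
	"sort"
)

// RuntimeHandler is a simplified stand-in for the CRI runtime handler
// message; the real containerd/CRI types carry more fields.
type RuntimeHandler struct {
	Name string
}

// stableHandlers builds a slice from a handler map in deterministic
// name order: collect the map keys, sort them, then append values in
// that order. Ranging over the map directly would yield a different
// order on each call.
func stableHandlers(handlers map[string]RuntimeHandler) []RuntimeHandler {
	names := make([]string, 0, len(handlers))
	for name := range handlers {
		names = append(names, name)
	}
	sort.Strings(names)

	out := make([]RuntimeHandler, 0, len(names))
	for _, name := range names {
		out = append(out, handlers[name])
	}
	return out
}

func main() {
	m := map[string]RuntimeHandler{
		"runc":   {Name: "runc"},
		"kata":   {Name: "kata"},
		"gvisor": {Name: "gvisor"},
	}
	for _, h := range stableHandlers(m) {
		fmt.Println(h.Name)
	}
}
```

With a stable order, the kubelet sees an identical status response on every poll, so it stops issuing spurious node status updates.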