fix network cleanup flake in play kube #23457
Conversation
When using service containers and play kube we create a complicated set of dependencies.

First, in a pod all conmon/container cgroups are part of one slice; that slice is removed when the entire pod is stopped, causing systemd to kill all processes that were part of it.

The issue here is the interaction of stopPodIfNeeded() and stopIfOnlyInfraRemains(): once a container is cleaned up, we check whether the pod should be stopped depending on the pod ExitPolicy, and if so stop all containers in that pod. However, in our flaky test we called podman pod kill, which logically killed all containers already. The cleanup logic therefore decides it must stop the pod and calls into pod.stopWithTimeout(). There we try to stop, but because all containers are already stopped it just returns errors and never gets to the point where it would call Cleanup(). So the code does not do cleanup and eventually calls removePodCgroup(), which causes all conmon and other podman cleanup processes of this pod to be killed.

Thus the podman container cleanup process was likely killed while actually trying to do the proper cleanup, which leaves us in a bad state.

Subsequent commands such as podman pod rm will try to do the cleanup again as they see it was not completed, but then fail because they are unable to recover from the partial cleanup state.

Long term, network cleanup needs to be more robust and ideally should be idempotent to handle cases where cleanup was killed in the middle.

Fixes containers#21569

Does this PR introduce a user-facing change?

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
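To make the ordering concrete, here is a minimal sketch of the failure mode (not podman's actual code; `container`, `stopPodBuggy`, `stopPodFixed`, and `errAlreadyStopped` are invented names): treating "already stopped" as fatal aborts the stop loop before cleanup ever runs, while tolerating it lets cleanup complete even after podman pod kill has stopped everything.

```go
// Hypothetical sketch of the stop-then-cleanup ordering described above.
package main

import (
	"errors"
	"fmt"
)

var errAlreadyStopped = errors.New("container is already stopped")

type container struct {
	name    string
	stopped bool
}

func (c *container) stop() error {
	if c.stopped {
		// After "podman pod kill" every container is already stopped,
		// so this branch is taken for all of them.
		return errAlreadyStopped
	}
	c.stopped = true
	return nil
}

func (c *container) cleanup() { fmt.Println("cleaned up network for", c.name) }

// Buggy variant: the first stop error aborts the whole pod stop, so
// cleanup() is never reached, and a later cgroup removal would kill the
// conmon/cleanup processes that are still in flight.
func stopPodBuggy(ctrs []*container) error {
	for _, c := range ctrs {
		if err := c.stop(); err != nil {
			return err // bails out before any cleanup
		}
		c.cleanup()
	}
	return nil
}

// Fixed variant: "already stopped" is not fatal; cleanup still runs for
// every container, so state is consistent before the cgroup is removed.
func stopPodFixed(ctrs []*container) error {
	for _, c := range ctrs {
		if err := c.stop(); err != nil && !errors.Is(err, errAlreadyStopped) {
			return err
		}
		c.cleanup()
	}
	return nil
}

func main() {
	ctrs := []*container{{name: "infra", stopped: true}, {name: "app", stopped: true}}
	fmt.Println("buggy: ", stopPodBuggy(ctrs))

	ctrs2 := []*container{{name: "infra", stopped: true}, {name: "app", stopped: true}}
	fmt.Println("fixed: ", stopPodFixed(ctrs2))
}
```

Running it, the buggy variant returns the "already stopped" error without ever printing a cleanup line, while the fixed variant cleans up both containers and returns nil.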
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: Luap99. The full list of commands accepted by this bot can be found here. The pull request process is described here.
LGTM
I don't know; it is possible that callers that didn't call cleanup() due to the failing stop call now do correctly call cleanup(), so the error might appear in more places. But that should only happen on code paths that use pods. If no pods are used this code should never be called, and your logs show normal containers without any pods failing too, so I don't see how this would be related.
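For illustration, a minimal sketch of the "idempotent cleanup" idea from the PR description, assuming an invented netns-file layout (`teardownNetwork` and the path handling are hypothetical, not podman's real network code): a retried cleanup treats "already gone" as success, so a second invocation, e.g. from podman pod rm after a killed cleanup process, does not fail.

```go
// Hypothetical sketch: network teardown that is safe to call twice.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// teardownNetwork removes the per-container network namespace file.
// Calling it twice is safe: a missing file means a previous (possibly
// killed and retried) cleanup already did the work.
func teardownNetwork(netnsPath string) error {
	err := os.Remove(netnsPath)
	if errors.Is(err, fs.ErrNotExist) {
		return nil // already cleaned up: idempotent success
	}
	return err
}

func main() {
	f, err := os.CreateTemp("", "netns-")
	if err != nil {
		panic(err)
	}
	f.Close()

	fmt.Println("first cleanup: ", teardownNetwork(f.Name())) // removes the file
	fmt.Println("second cleanup:", teardownNetwork(f.Name())) // no-op, still nil
}
```

The first call removes the file; the second finds nothing to do and still returns nil, which is exactly the property a retried cleanup needs.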
I'm going to merge to get this ready for the release. |
The mergebot appears to be broken? |