
Conversation

@corhere
Member

@corhere commented Aug 23, 2024

The runc task state machine prevents execs from being created after the
init process has exited, but there are no guards against starting a
created exec after the init process has exited. That leaves a small
window for starting an exec to race our handling of the init process
exiting. Normally this is not an issue in practice: the kernel will
atomically kill all processes in a PID namespace when its "init" process
terminates, and will not allow new processes to fork(2) into the PID
namespace afterwards. Therefore the racing exec is guaranteed by the
kernel to not be running after the init process terminates. On the other
hand, when the container does not have a private PID namespace (i.e. the
container's init process is not the "init" process of the container's
PID namespace), the kernel does not automatically kill other container
processes on init exit and will happily allow runc to start an exec
process at any time. It is the runc shim's responsibility to clean up
the container when the init process exits in this situation by killing
all the container's remaining processes. Block execs from being started
after the container's init process has exited to prevent the processes
from leaking, and to avoid violating the task service's assumption that
an exec can be running iff the init process is also running.

Signed-off-by: Cory Snider <csnider@mirantis.com>
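
To make the proposed guard concrete, here is a minimal sketch of the kind of check the commit message describes. All names (service, container, initExited, startExec) are hypothetical stand-ins, not the shim's real types; the actual patch differs.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

type container struct {
	initExited bool // set by the init-exit handler
}

type service struct {
	mu         sync.Mutex
	containers map[string]*container
}

var errInitExited = errors.New("cannot start exec: init process has exited")

// startExec refuses to start a created exec once the container's init
// process has exited. Holding the same lock as the init-exit handler
// closes the race window between the check and the start.
func (s *service) startExec(id string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	c, ok := s.containers[id]
	if !ok || c.initExited {
		return errInitExited
	}
	// ... start the exec process here, still under s.mu ...
	return nil
}

func main() {
	s := &service{containers: map[string]*container{
		"c1": {initExited: true},
	}}
	fmt.Println(s.startExec("c1")) // cannot start exec: init process has exited
}
```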
@k8s-ci-robot

Hi @corhere. Thanks for your PR.

I'm waiting for a containerd member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@dosubot bot added the area/runtime label Aug 23, 2024
@corhere
Member Author

@corhere commented Aug 23, 2024

I've taken my own stab at this. I am pretty sure this is only a partial solution: in the shared-PIDns case, init exit events can still get dropped if the init process exits between the exec's preStart and handleStarted, since the exec does not exit early there. I think that, combined with #10608, we're beginning to converge on a complete solution.

/cc @laurazard

@samuelkarp
Member

I don't think this race can occur only in a shared-PID scenario. Consider the following:

  • We use s.pendingExecs[c] to count the number of pending execs for a given container
  • This count is incremented during Start and decremented during handleStarted
  • Each handleStarted closure holds a local copy of any exits that occurred between the call to preStart (where a subscription to exits was created) and handleStarted, the idea being that exits which happen while the new exec process is starting are not lost
  • We only process an init exit if s.pendingExecs[c] is exactly zero and the init exit was recorded in the per-exec copy of the exits
  • With a high enough rate of new execs, or on a slow enough system, the following sequence of events can occur:
  1. Start is invoked for exec ID 1 and increments s.pendingExecs[c] to 1
  2. Start invokes preStart, creating a local exits for exec 1 subscribed to exits for the container
  3. The init process exits
  4. Start is invoked for exec ID 2 and increments s.pendingExecs[c] to 2
  5. Start invokes preStart, creating a local exits for exec 2 subscribed to exits for the container
  6. Start invokes handleStarted for exec ID 1
  7. handleStarted for exec ID 1 decrements s.pendingExecs[c] to 1, so the init exit is ignored
  8. handleStarted for exec ID 2 decrements s.pendingExecs[c] to 0, but the init exit is not present in its local copy of exits, and the init exit is thus lost

None of this presupposes a shared PID namespace; it is a race within the shim between the concurrent-exec bookkeeping and the handling of the init process exiting.
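To make the timeline concrete, here is a condensed, self-contained sketch of that sequence. The real shim uses channels and per-container state; the names and the slice-index "subscription" here are simplifications for illustration only.

```go
package main

import "fmt"

type exit struct{ pid int }

type shim struct {
	pendingExecs int
	exits        []exit // the container's exit feed
}

// preStart models the subscription point: only exits recorded after
// this call appear in the local copy handed to handleStarted.
func (s *shim) preStart() int { return len(s.exits) }

func (s *shim) handleStarted(from int) {
	s.pendingExecs--
	local := s.exits[from:] // exits observed since this exec's preStart
	if s.pendingExecs == 0 {
		// An init exit is only processed if it shows up in local.
		fmt.Println("processing exits:", local)
	}
}

func main() {
	s := &shim{}
	s.pendingExecs++                   // 1. Start(exec 1)
	sub1 := s.preStart()               // 2. exec 1 subscribes
	s.exits = append(s.exits, exit{1}) // 3. init exits
	s.pendingExecs++                   // 4. Start(exec 2)
	sub2 := s.preStart()               // 5. exec 2 subscribes; init exit predates it
	s.handleStarted(sub1)              // 6,7. count drops to 1: init exit ignored
	s.handleStarted(sub2)              // 8. count hits 0, but local is empty: exit lost
}
```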

@samuelkarp
Member

As we discussed on Slack: can we track the init exit outside the local exits within preStart, so that reliably receiving the init exit doesn't depend on when that variable was initialized and subscribed?
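
One possible shape for that suggestion, sketched with hypothetical names (the eventual fix in the shim may look quite different): record the init exit on per-container state the moment it happens, so handleStarted can always see it.

```go
package main

import (
	"fmt"
	"sync"
)

type container struct {
	mu           sync.Mutex
	pendingExecs int
	initExit     *int // exit status, stored as soon as init exits
}

// recordInitExit is called by the exit handler regardless of any
// in-flight execs, so the exit can never be missed by a late subscriber.
func (c *container) recordInitExit(status int) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.initExit = &status
}

// handleStarted consults the per-container record rather than a local
// copy of exits, so it no longer matters when the exec subscribed.
func (c *container) handleStarted() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.pendingExecs--
	if c.pendingExecs == 0 && c.initExit != nil {
		fmt.Println("processing deferred init exit:", *c.initExit)
	}
}

func main() {
	c := &container{pendingExecs: 2}
	c.recordInitExit(137) // init exits while two execs are pending
	c.handleStarted()     // 2 -> 1: exit deferred
	c.handleStarted()     // 1 -> 0: exit processed, not lost
}
```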

Process: p,
})
if init {
s.execable[c] = true
Member

This map is nil

panic: assignment to entry in nil map

goroutine 11 [running]:
github.com/containerd/containerd/v2/cmd/containerd-shim-runc-v2/task.(*service).preStart.func1(0xc0000a81e0, {0xd34360, 0xc000190160})
	/home/runner/work/containerd/containerd/cmd/containerd-shim-runc-v2/task/service.go:241 +0x6cb
github.com/containerd/containerd/v2/cmd/containerd-shim-runc-v2/task.(*service).Create(0xc0002001b0, {0xd2e910, 0xc000210180}, 0xc00027c0c0)
	/home/runner/work/containerd/containerd/cmd/containerd-shim-runc-v2/task/service.go:293 +0x3a4
github.com/containerd/containerd/api/runtime/task/v3.RegisterTTRPCTaskService.func2({0xd2e910, 0xc000210180}, 0xc00026c160)
	/home/runner/work/containerd/containerd/vendor/github.com/containerd/containerd/api/runtime/task/v3/shim_ttrpc.pb.go:46 +0x8c
github.com/containerd/ttrpc.defaultServerInterceptor({0xd2e910?, 0xc000210180?}, 0x7f6e6093d108?, 0x10?, 0xc00006af00?)
	/home/runner/work/containerd/containerd/vendor/github.com/containerd/ttrpc/interceptor.go:52 +0x22
github.com/containerd/ttrpc.(*serviceSet).unaryCall(0xc000010618, {0xd2e910, 0xc000210180}, 0xc000010648, 0xc0002682e0, {0xc00018e000, 0x1ea, 0x200})
	/home/runner/work/containerd/containerd/vendor/github.com/containerd/ttrpc/services.go:75 +0xe3
github.com/containerd/ttrpc.(*serviceSet).handle.func1()
	/home/runner/work/containerd/containerd/vendor/github.com/containerd/ttrpc/services.go:118 +0x158
created by github.com/containerd/ttrpc.(*serviceSet).handle in goroutine 10
	/home/runner/work/containerd/containerd/vendor/github.com/containerd/ttrpc/services.go:111 +0x14c
time="2024-08-26T23:36:47.943344153Z" level=info msg="shim disconnected" id=TestRestartMonitor_Always namespace=testing
time="2024-08-26T23:36:47.943374910Z" level=warning msg="cleaning up after shim disconnected" id=TestRestartMonitor_Always namespace=testing
time="2024-08-26T23:36:47.943386833Z" level=info msg="cleaning up dead shim" namespace=testing
time="2024-08-26T23:36:48.065999639Z" level=error msg="copy shim log" error="read /proc/self/fd/14: file already closed" namespace=testing
time="2024-08-26T23:36:48.109368103Z" level=info msg="connecting to shim TestRestartMonitor_Paused_Task" address="unix:///run/containerd/s/6edb8f958c08d0d4617d7d968277d999aaea6d6440c91387f8af295f25a27e84" namespace=testing protocol=ttrpc version=3
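
This is Go's standard nil-map write panic: s.execable was declared but never allocated before the assignment. A minimal illustration of the failure mode and the usual fix, allocating the map when the service is constructed; the field name is borrowed from the trace, the surrounding code is hypothetical:

```go
package main

type service struct {
	execable map[string]bool
}

// newService allocates the map up front; writing to a nil map panics
// with "assignment to entry in nil map", as seen in the trace above.
func newService() *service {
	return &service{execable: make(map[string]bool)}
}

func main() {
	s := newService()
	s.execable["c1"] = true // safe: the map is non-nil

	var broken service
	_ = broken.execable["c1"] // reads from a nil map are fine
	// broken.execable["c1"] = true // would panic: assignment to entry in nil map
}
```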

Member Author

Serves me right opening a PR without first smoke-testing locally 😅

@laurazard
Member

@samuelkarp what you're describing is what I brought up here.

Indeed, a solution to that is to not rely on exits for keeping track of the init exit, and instead to stash it somewhere else (per container).

@corhere
Member Author

@corhere commented Aug 27, 2024

Closing in favour of #10647
