Description
I think this issue only impacts CRI, since all containers within a pod share a single shim.
Every start/terminate cycle of a container with tty: true causes the shim to leak a console (pty master) file descriptor.
Eventually, containers fail to start because the system runs out of ptys.
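One quick way to watch the leak node-wide (a sketch, not part of the original report) is to count the open pty masters held by each shim; a leaked master shows up as a /dev/ptmx entry under /proc/<pid>/fd:

# count pty master fds held by each containerd shim
for pid in $(pgrep -f containerd-shim); do
  echo "shim $pid: $(ls -l /proc/$pid/fd 2>/dev/null | grep -c ptmx) ptmx fds"
done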
The current workaround is to increase the limit in /proc/sys/kernel/pty/max.
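For example (the 8192 value below is only illustrative; pick whatever headroom the node needs):

# ptys currently allocated vs. the current limit (the default is typically 4096)
cat /proc/sys/kernel/pty/nr
cat /proc/sys/kernel/pty/max
# raise the limit as a temporary mitigation
sysctl -w kernel.pty.max=8192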
Steps to reproduce the issue
1. Start a pod with crictl runp pod.json
2. Create a container with crictl create <pod> ctr.json pod.json
3. Start the container with crictl start <ctr>
4. Check the shim's tty fds with lsof -p <shim-pid> | grep ptmx
5. Stop and delete the container with crictl stop <ctr> and crictl rm <ctr>
6. Check the shim's tty fds again
7. Repeat steps 2-6; the tty count keeps increasing (a script automating this loop follows the configs below)
# cat pod.json
{
"metadata": {
"name": "nginx-sandbox",
"namespace": "default",
"attempt": 1,
"uid": "hdishd83djaidwnduwk28bcsb"
},
"log_directory": "/tmp",
"linux": {
}
}
# cat ctr.json
{
"metadata": {
"name": "busybox"
},
"image":{
"image": "docker.io/library/busybox:latest"
},
"command": [
"top"
],
"log_path":"busybox.0.log",
"linux": {
},
"stdin": true,
"tty": true
}
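The repro loop above can be scripted. This is only a sketch: it assumes crictl and lsof are on the PATH, that the pod.json and ctr.json above sit in the working directory, and that the pod's shim can be located by matching the sandbox ID on the shim's command line (repro.sh is a made-up name):

# cat repro.sh
#!/bin/sh
# Create the pod sandbox once; containers are created and removed inside it.
POD=$(crictl runp pod.json)

for i in $(seq 1 5); do
  CTR=$(crictl create "$POD" ctr.json pod.json)
  crictl start "$CTR"
  crictl stop "$CTR"
  crictl rm "$CTR"
  # Locate the shim serving this pod and count its open pty masters.
  # The count should drop back to zero, but it grows by one per iteration.
  SHIM=$(pgrep -f "containerd-shim.*$POD" | head -n 1)
  echo "iteration $i: $(lsof -p "$SHIM" | grep -c ptmx) ptmx fds"
done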
Describe the results you received and expected
Received: the shim's ptmx fd count grows by one with every container start/stop cycle and is never released. Expected: at step 6 of the repro, the tty count should be zero.
What version of containerd are you using?
1.7 and 2.0
Any other relevant information
No response
Show configuration if it is related to CRI plugin.
No response