What happened?
This happens when deploying a pod with an Argo Workflow while another job is running that edits the ConfigMap.
The workflow failed with the following error:
```
StartError (exit code 128): failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/var/lib/kubelet/pods/f22aa0b7-afdc-497c-beb3-ccb394e8428d/volume-subpaths/config/wait/9" to rootfs at "/mainctrfs/my.py": mount /var/lib/kubelet/pods/f22aa0b7-afdc-497c-beb3-ccb394e8428d/volume-subpaths/my-py/wait/9:/mainctrfs/my.py (via /proc/self/fd/6), flags: 0x5001, data: context="system_u:object_r:data_t:s0:c923,c1020": no such file or directory: unknown
```
What did you expect to happen?
I expected the pod to come up successfully with whichever version of the ConfigMap was available.
How can we reproduce it (as minimally and precisely as possible)?
This is a little hard to reproduce, but the steps are (see the sketch after this list):
- Run a script or job that replaces/overwrites a ConfigMap.
- At the same time, create a pod that mounts data from that ConfigMap with subPath.
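A minimal sketch of the race, assuming a ConfigMap named my-py with key my.py (both inferred from the paths in the error message); the pod name, image, and ConfigMap contents are hypothetical placeholders, not taken from the real workflow:

```sh
# Hypothetical repro sketch; my-py / my.py are inferred from the error,
# everything else (pod name, image, file contents) is a placeholder.

# 1) Create a ConfigMap and a pod that mounts a single key via subPath.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-py
data:
  my.py: |
    print("hello")
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-race
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: python:3.11
    command: ["python", "/app/my.py"]
    volumeMounts:
    - name: config
      mountPath: /app/my.py
      subPath: my.py        # subPath binds a resolved file path at container start
  volumes:
  - name: config
    configMap:
      name: my-py
EOF

# 2) In a second shell, overwrite the ConfigMap in a tight loop while the pod
#    (or repeated copies of it) is being created, until the mount error appears.
while true; do
  kubectl create configmap my-py --from-literal=my.py='print("hi")' \
    --dry-run=client -o yaml | kubectl replace -f -
done
```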
Anything else we need to know?
No response
Kubernetes version
Client Version: v1.29.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.27.8-gke.1067004
Cloud provider
GKE
OS version
No response
Install tools
No response
Container runtime (CRI) and version (if applicable)
containerd
Related plugins (CNI, CSI, ...) and versions (if applicable)
No response
Related Issues:
argoproj/argo-workflows#11983