StartError (exit code 128) when pod tries to mount configmap that is being changed at same time #11983

@tooptoop4

Description

Pre-requisites

  • I have double-checked my configuration
  • I can confirm the issue exists when I tested with :latest
  • I'd like to contribute the fix myself (see contributing guide)

What happened/what you expected to happen?

A step of my workflow mounts a ConfigMap, i.e.:

volumes:
  - name: my-py
    configMap:
      name: my-py
      defaultMode: 0744

volumeMounts:
  - name: my-py
    mountPath: /my.py
    subPath: my.py
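
For reference, the full step looks roughly like the sketch below. This is a minimal reconstruction from the snippets above; the image, command, and template name are assumptions, while the my-py ConfigMap, mountPath, and subPath are as in the original.

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: configmap-subpath-
spec:
  entrypoint: main
  volumes:
    - name: my-py
      configMap:
        name: my-py
        defaultMode: 0744
  templates:
    - name: main
      container:
        image: python:3.11        # assumed; the real image is not shown above
        command: [python, /my.py]
        volumeMounts:
          - name: my-py
            mountPath: /my.py
            subPath: my.py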

It just so happened that I ran a workflow at the same time as the ConfigMap was being replaced/overwritten.

The workflow failed with the message below and won't show the logs of that step in the Argo UI, but the container logs in an external logging tool show that the pod actually ran all of its code successfully.

StartError (exit code 128): failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/var/lib/kubelet/pods/f22aa0b7-afdc-497c-beb3-ccb394e8428d/volume-subpaths/my-py/wait/9" to rootfs at "/mainctrfs/my.py": mount /var/lib/kubelet/pods/f22aa0b7-afdc-497c-beb3-ccb394e8428d/volume-subpaths/my-py/wait/9:/mainctrfs/my.py (via /proc/self/fd/6), flags: 0x5001, data: context="system_u:object_r:data_t:s0:c923,c1020": no such file or directory: unknown

I have run tens of thousands of this type of workflow successfully; this one run just happened to coincide with the ConfigMap being updated.
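
A hypothetical repro sketch (untested; assumes my.py exists locally and the workflow sketch above is saved as workflow.yaml): replace the ConfigMap in a tight loop while repeatedly submitting the workflow, until a pod hits the subPath bind mount mid-update.

# shell 1: keep replacing the ConfigMap so the kubelet re-projects the volume
while true; do
  kubectl create configmap my-py --from-file=my.py \
    --dry-run=client -o yaml | kubectl replace -f -
done

# shell 2: submit until a step fails with StartError (exit code 128)
argo submit workflow.yaml --watch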

Maybe a race condition upstream: https://github.com/kubernetes/kubernetes/blob/2c4a863bf9b3f0d290a3f21bf7fe57d9f15a39b5/test/e2e/node/pods.go#L766-L768C66
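
For background, my understanding of the kubelet behavior (not verified against this exact failure): ConfigMap volumes are projected atomically through a timestamped payload directory and a ..data symlink, roughly:

/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~configmap/my-py/
  ..2023_10_08_00_00_00.0000000000/   # timestamped payload directory
    my.py
  ..data -> ..2023_10_08_00_00_00.0000000000
  my.py  -> ..data/my.py

A subPath mount, however, is a bind mount of one concrete resolved path prepared under /var/lib/kubelet/pods/<pod-uid>/volume-subpaths/. If the ConfigMap is replaced between path resolution and the runc mount, the resolved payload directory can be deleted, which would match the "no such file or directory" in the error above. If that is the cause, a possible mitigation (a sketch, not a confirmed fix) is to drop subPath and mount the whole volume, e.g. mountPath: /scripts with command [python, /scripts/my.py], since whole-volume mounts follow the ..data symlink swap instead of pinning one payload directory.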

Version

3.4.11

Paste a small workflow that reproduces the issue. We must be able to run the workflow; don't enter a workflow that uses private images.

n/a

Logs from the workflow controller

kubectl logs -n argo deploy/workflow-controller | grep ${workflow}

Logs from your workflow's wait container

kubectl logs -n argo -c wait -l workflows.argoproj.io/workflow=${workflow},workflow.argoproj.io/phase!=Succeeded
