
Use user from OCI config per default #2

Closed
saschagrunert wants to merge 1 commit into NVIDIA:master from saschagrunert:user
Conversation

@saschagrunert

We have to use the user from the OCI configuration to have the right set
of user permissions inside the container.

Signed-off-by: Sascha Grunert <sgrunert@suse.com>
@RenaudWasTaken
Contributor

Hello @saschagrunert !

Thanks for your contribution! All the GitHub repos are mirrors of repos on GitLab; do you mind making your contribution here: https://gitlab.com/nvidia/container-toolkit/container-toolkit

Thanks!

@saschagrunert
Author

Yes, sure 👍 Thank you for the hint!

@saschagrunert saschagrunert deleted the user branch April 21, 2020 08:24
cdesiniotis added a commit to cdesiniotis/nvidia-container-toolkit that referenced this pull request Oct 30, 2025
This change is required to make our nvidia runtime wrapper compliant with
the OCI runtime spec. All OCI-compliant runtimes must support the operations
documented at https://github.com/opencontainers/runtime-spec/blob/v1.2.1/runtime.md#operations.
Before this change, our nvidia runtime wrapper was not producing the expected
output when the query state operation (`state <container-id>`) was invoked
AND the nvidia kernel modules happened to not be loaded. In this case, we were
emitting an extra log message which caused the stdout of this command to not
adhere to the schema defined in the OCI runtime spec. Redirecting the log
message to stderr makes us compliant.

This issue was discovered when deploying GPU Operator 25.10.0 on nodes using cri-o.
GPU Operator 25.10.0 is the first release that installs nvidia runtime handlers
with cri-o by default, as opposed to installing an OCI hook file. When performing
a GPU driver upgrade, pods in the gpu-operator namespace would be in the
`Init:RunContainerError` state for several minutes until the new driver finished
installing -- note that no nvidia driver modules are loaded during this span of
several minutes. When inspecting the cri-o logs, we observed the following error
message:

```
level=warning msg="Error updating the container status \"16779f4cd2414a164aae56856b491f86fe0c6b803a3b4474ada2cc0864c8e028\": failed to decode container status for 16779f4cd2414a164aae56856b491f86fe0c6b803a3b4474ada2cc0864c8e028: skipThreeBytes: expect ull, error found in NVIDIA#2 byte of ...|nvidia drive|..., bigger context ...|nvidia driver modules are not yet loaded, invoking /|..." id=a4b48041-edc4-48c2-8d75-4ad03cb3d8e1 name=/runtime.v1.RuntimeService/CreateContainer
```

This error message indicates cri-o failed to get the status of the container because
it could not decode the JSON returned by the runtime handler.

Signed-off-by: Christopher Desiniotis <cdesiniotis@nvidia.com>
github-actions bot pushed a commit that referenced this pull request Nov 3, 2025
(cherry picked from commit 61f9bde)