Use storage that better supports rootless overlayfs#13375
openshift-merge-robot merged 1 commit into containers:main from kousu:repair-13123
Conversation
|
Hi! I might be jumping the gun here. I wanted to be able to test my fix from containers/storage#1156, but I haven't even posted my full investigation on #13123 nor actually tested this yet, and containers/storage hasn't had a new release yet. The branch will be useful on my end just for testing, but no worries if you need to close this and instead wait for the next release of containers/storage. |
|
This is fine. |
|
@containers/podman-maintainers PTAL |
|
Thank you for the pleasant experience @rhatdan. To test this, you need to activate the overlayfs driver without using fuse-overlayfs (i.e. with no mount_program set), and then you need to find a container with many layers to run. In the broken case, you'll see EXDEV errors. The working case, with this patch in, should succeed and exit with 0. That container is very, very large; I'm still working on making a reduced test case. |
|
LGTM |
|
rebase and it will get merged |
overlayfs -- the kernel's version, not fuse-overlayfs -- recently learned
(as of linux 5.16.0, I believe) how to support rootless users. Previously,
rootless users had to use these storage.conf(5) settings:
* storage.driver=vfs (aka STORAGE_DRIVER=vfs), or
* storage.driver=overlay (aka STORAGE_DRIVER=overlay),
storage.options.overlay.mount_program=/usr/bin/fuse-overlayfs
(aka STORAGE_OPTS=overlay.mount_program=/usr/bin/fuse-overlayfs) -- and this is the current default
Now a third backend is available, which needs only:
* storage.driver=overlay (aka STORAGE_DRIVER=overlay)
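Spelled out as a storage.conf(5) sketch (section names follow containers-storage.conf; the per-user path shown is the usual rootless location, given here as an assumption):

```toml
# ~/.config/containers/storage.conf (rootless user)
[storage]
driver = "overlay"

[storage.options.overlay]
# No mount_program set: on kernels >= 5.16 the native
# overlayfs is used directly instead of fuse-overlayfs.
```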
With this configuration, #13123 reported EXDEV errors
during the normal operation of their container. Tracing it out, the
problem turned out to be that their container was being mounted without
'userxattr'; I don't fully understand why, but mount(8) mentions this is
needed for rootless users:
> userxattr
>
> Use the "user.overlay." xattr namespace instead of "trusted.overlay.".
> This is useful for unprivileged mounting of overlayfs.
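A minimal sketch (not podman's or containers/storage's actual code; the function name is illustrative) of the decision that the fix effectively makes: when overlayfs is mounted by an unprivileged user, 'userxattr' must be among the mount options so the kernel uses the user.overlay.* xattr namespace instead of trusted.overlay.*:

```python
def overlay_mount_opts(euid: int, base_opts: list[str]) -> list[str]:
    """Illustrative sketch: choose overlayfs mount options.

    Rootless mounts (euid != 0) cannot write trusted.overlay.* xattrs,
    so they must pass 'userxattr' to switch the kernel to the
    user.overlay.* namespace (see mount(8)). Without it, layer
    metadata is mishandled and operations can fail with EXDEV.
    """
    opts = list(base_opts)
    if euid != 0 and "userxattr" not in opts:
        opts.append("userxattr")
    return opts

# Rootless mount: 'userxattr' is appended.
print(overlay_mount_opts(1000, ["lowerdir=/l", "upperdir=/u", "workdir=/w"]))
# Privileged mount: options are left unchanged.
print(overlay_mount_opts(0, ["lowerdir=/l", "upperdir=/u", "workdir=/w"]))
```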
containers/storage#1156 found and fixed the source of the issue,
and this just pulls that in via
go get github.com/containers/storage@ebc90ab
go mod vendor
make vendor
Closes #13123
Signed-off-by: Nick Guenther <nick.guenther@polymtl.ca>
You've got it :) |
|
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: giuseppe, kousu, rhatdan. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Details: Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing |
|
/hold pending tests passing |
|
Is there anything I can do about the test failure? It looks like it's this bug #12624? |
|
@kousu I've restarted the offending test; chances are good that it will pass on re-run. |
|
Tests are green, I've released the hold. Thank you for your work and your patience @kousu! |
|
And thanks for podman! I'm using it to help develop some Ansible scripts. Our targets need systemd, so it was either rent a VPS or otherwise run a new VM every time I test, or use podman 🎉. And thanks also for the extremely quick turnaround. Everyone I've shown has commented that <24 hrs is extremely fast for open source work to get done. |
|
@kousu This just showed up today. https://github.com/linux-system-roles/podman |