CI flake: podman-remote: no output from container (and "does not exist in database"?) #7195
Closed
Labels
flakes (Flakes from Continuous Integration), kind/bug (categorizes issue or PR as related to a bug), locked - please file new issue/PR (comments locked on an old issue or PR), remote (problem is in podman-remote)
Description
This is a bad report. I have no reproducer nor any real sense of what's going on.
I'm seeing consistent flakes in #7111. The failing test is always "podman run : user namespace preserved root ownership", which is simply a quick loop of podman run commands. The last set of failures all looked like:
[+0136s] # # podman-remote --url ... run --rm --user=100 --userns=keep-id quay.io/libpod/alpine_labels:latest stat -c %u:%g:%n /etc
[+0136s] # #/vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
[+0136s] # #| FAIL: run --user=100 --userns=keep-id (/etc) <<<--- these flags are not always the same
[+0136s] # #| expected: '0:0:/etc'
[+0136s] # #| actual: ''
[+0136s] # #\^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In two of the three most recent failures, during teardown, a podman rm -a -f barfs with:
[+0136s] # Error: container d0ea34aaeffe07cc3e3f7f79933372a2bab825ef97e17c580eb1e1e94b2ac7e7 does not exist in database: no such container
Logs: fedora 32, fedora 31, special testing rootless.
In an even earlier run of special_testing_rootless, there was a different error in a different test:
[+0257s] not ok 74 podman volume with --userns=keep-id
[+0257s] # $ /var/tmp/go/src/github.com/containers/podman/bin/podman-remote --url unix:/tmp/podman.fpYZ0P run --rm -v /tmp/podman_bats.LsZK3v/volume_O8zRoMsGmt:/vol:z quay.io/libpod/alpine_labels:latest stat -c %u:%s /vol/myfile
[+0257s] # read unixpacket @->/run/user/23298/libpod/tmp/socket/1bee53f8e19e2b03ff773e504fead7f5994b8be5545b315a73a3d2cef290f567/attach: read: connection reset by peer
[+0257s] # #/vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
[+0257s] # #| FAIL: w/o keep-id: stat(file in container) == root
[+0257s] # #| expected: '0:0'
[+0257s] # #| actual: 'read unixpacket @->/run/user/23298/libpod/tmp/socket/1bee53f8e19e2b03ff773e504fead7f5994b8be5545b315a73a3d2cef290f567/attach: read: connection reset by peer'
[+0257s] # #\^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The socket above is a conmon one; I don't know whether this is a conmon problem.
The common factor seems to be --userns=keep-id.