qa: ignore cluster warning (evicting unresponsive ...) with tasks/mgr-osd-full#65693
Conversation
…-osd-full fs/full/subvolume_ls.sh will restart ceph-mgr periodically and that does not clean up libcephfs handles. Fixes: http://tracker.ceph.com/issues/73278 Signed-off-by: Venky Shankar <vshankar@redhat.com>
Why doesn’t restart clean up libcephfs? That seems…not great.
This is nothing new afaik -- it has always been that way, because cleanup of plugins has been problematic in the manager, so they are never cleaned up. @batrick did some work on this to blocklist the clients by including the client addrs in the manager beacon message sent to the monitor, but that isn't sufficient.
I haven't looked at the test in a while, but there is a race between a mgr libcephfs handle mounting CephFS, registering the client, and then the beacon sent to the mons including the new client instance for blocklist. This could be a little better if the mgr created a Rados handle first, registered the client instance, and then passed that handle to libcephfs. There would still be a race, however, with waiting for the …
I'm surely missing a bit here since it's been a while since I looked at #51169, but the libcephfs client addrs are sent to the mon after the mount is done, and there is still a race when the mgr restarts just before the addrs can be sent to the monitor in its beacon message.
Yes, that's why I said we could create a Rados handle first which would let us know the client instance before establishing a session with the MDS.
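The ordering proposed here could be sketched roughly as below. Everything in this sketch is a stand-in: stub classes and a made-up client addr, not the real librados/libcephfs bindings. It only shows the sequence in which the client instance would become known (and beaconable) relative to the MDS session:

```python
# Sketch of the proposed ordering, using stubs rather than the real
# librados/libcephfs APIs. The point: the client instance is known
# after the Rados connect, so the beacon can carry its addrs *before*
# any MDS session exists, closing most of the blocklist race.

events = []

class StubRados:
    def connect(self):
        events.append("rados_connect")          # client instance registered here
        self.addrs = "v2:203.0.113.10:0/12345"  # made-up addr, illustration only

class StubBeacon:
    def send(self, addrs):
        events.append("beacon_includes_client") # mon can blocklist on mgr failover

class StubCephFS:
    def __init__(self, rados):
        events.append("cephfs_from_rados")      # reuse the same client instance
    def mount(self):
        events.append("mds_session")            # MDS session established last

cluster = StubRados()
cluster.connect()
StubBeacon().send(cluster.addrs)  # addrs known before any MDS session
fs = StubCephFS(cluster)
fs.mount()
print(events)
```

As the thread notes, a window remains between the mgr restarting and the first beacon carrying the addrs; this ordering only shrinks it.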
Right, so blocking the return of …
Seems like fixes we really need to get in. Right now a failed ceph-mgr could keep doing things to the fs/subvolumes and that seems real bad. 😮 Doesn't have to block making QA go, but I didn't realize we had this hole and we should prioritize it...
I have reached out to @ajarr to get the PR moving, or the fs team can work on it if @ajarr agrees.
@vshankar please feel free to take over. This is still to be addressed in PR 51169, #51169 (comment). Thanks!
fs/full/subvolume_ls.sh will restart ceph-mgr periodically and that does not clean up libcephfs handles.
Fixes: http://tracker.ceph.com/issues/73278
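The warning in the PR title would typically be silenced via a teuthology `log-ignorelist` override. A purely illustrative fragment follows; the exact yaml file and message pattern used by this PR may differ:

```yaml
# Illustrative only: the actual suite file and warning pattern in
# this PR may differ from what is shown here.
overrides:
  ceph:
    log-ignorelist:
      - evicting unresponsive client
```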