Bug #57656
opendbench: write failed on handle 10009 (Resource temporarily unavailable)
Description
When testing with my postgres changes:
https://github.com/ceph/ceph/labels/wip-pdonnell-testing2
I've observed failures with the kernel testing branch. Notably, the set of PRs introduces periodic snapshots to fs:workload, which seems to trigger this behavior:
2022-09-22T14:04:02.650 INFO:tasks.workunit.client.0.smithi124.stdout:[1415095] write failed on handle 10009 (Resource temporarily unavailable)
2022-09-22T14:04:02.650 INFO:tasks.workunit.client.0.smithi124.stdout:Child failed with status 1
I've not gone through all of these events, but the failures seem to share: (a) the testing branch; (b) snapshots turned on.
Updated by Patrick Donnelly over 3 years ago
/ceph/teuthology-archive/pdonnell-2022-09-26_19:11:10-fs-wip-pdonnell-testing-20220923.171109-distro-default-smithi/7044485/teuthology.log
/ceph/teuthology-archive/pdonnell-2022-09-26_19:11:10-fs-wip-pdonnell-testing-20220923.171109-distro-default-smithi/7044491/teuthology.log
Updated by Xiubo Li over 3 years ago
Another failure is:
2022-09-26T20:03:01.601 INFO:tasks.workunit.client.0.smithi066.stdout:Wrote -1 instead of 4096 bytes.
2022-09-26T20:03:01.601 INFO:tasks.workunit.client.0.smithi066.stdout:Probably out of disk space
2022-09-26T20:03:01.602 INFO:tasks.workunit.client.0.smithi066.stderr:write: Resource temporarily unavailable
2022-09-26T20:03:01.639 DEBUG:teuthology.orchestra.run:got remote process result: 1
2022-09-26T20:03:01.640 INFO:tasks.workunit:Stopping ['suites/ffsb.sh'] on client.0...
This is the kclient, and I didn't see any errors in the kernel logs. I also checked the MDS and OSD logs and found nothing!
This should be a known issue: https://tracker.ceph.com/issues/51410
Patrick,
Could you reproduce this?
Thanks!
Updated by Xiubo Li over 3 years ago
- Related to Bug #51410: kclient: fails to finish reconnect during MDS thrashing (testing branch) added
Updated by Xiubo Li over 3 years ago
- Status changed from In Progress to Need More Info
Today I spent more than half a day reading the MDS and OSD logs, but still couldn't find anything suspicious. Usually, if we can get some logs from the kernel, it is easy to tell exactly what happened, as in the earlier tracker https://tracker.ceph.com/issues/54461.
I tried but couldn't reproduce it locally! Let's wait for new occurrences to see whether we can get any useful info then.
Updated by Venky Shankar about 3 years ago
Updated by Patrick Donnelly almost 2 years ago
/teuthology/pdonnell-2024-03-24_04:56:01-fs-wip-batrick-testing-20240323.003144-squid-distro-default-smithi/7619048/teuthology.log
Updated by Venky Shankar almost 2 years ago
This shows up on almost every run. Latest: https://pulpito.ceph.com/yuriw-2024-04-01_20:57:46-fs-wip-yuri3-testing-2024-04-01-0837-squid-distro-default-smithi/7634520/
Nothing in the kernel ring buffer either.
Updated by Venky Shankar almost 2 years ago
- Related to Bug #65604: dbench.sh workload times out after 3h when run with quiescer added
Updated by Xiubo Li almost 2 years ago
- Status changed from Need More Info to In Progress
Venky Shankar wrote in #note-9:
This shows up on almost every run. Latest: https://pulpito.ceph.com/yuriw-2024-04-01_20:57:46-fs-wip-yuri3-testing-2024-04-01-0837-squid-distro-default-smithi/7634520/
Nothing in the kernel ring buffer either.
2024-04-02T00:47:41.423 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:41 smithi181 ceph-mon[36358]: Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)
2024-04-02T00:47:41.423 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:41 smithi181 ceph-mon[36358]: mds.? [v2:172.21.15.150:6836/1118918453,v1:172.21.15.150:6837/1118918453] up:active
2024-04-02T00:47:41.424 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:41 smithi181 ceph-mon[36358]: fsmap cephfs:5 5 up:standby-replay 2 up:standby 5 up:active
2024-04-02T00:47:41.482 INFO:journalctl@ceph.mon.b.smithi150.stdout:Apr 02 00:47:41 smithi150 ceph-mon[36451]: Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)
2024-04-02T00:47:41.483 INFO:journalctl@ceph.mon.b.smithi150.stdout:Apr 02 00:47:41 smithi150 ceph-mon[36451]: mds.? [v2:172.21.15.150:6836/1118918453,v1:172.21.15.150:6837/1118918453] up:active
2024-04-02T00:47:41.483 INFO:journalctl@ceph.mon.b.smithi150.stdout:Apr 02 00:47:41 smithi150 ceph-mon[36451]: fsmap cephfs:5 5 up:standby-replay 2 up:standby 5 up:active
2024-04-02T00:47:41.526 INFO:journalctl@ceph.mon.a.smithi110.stdout:Apr 02 00:47:41 smithi110 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mon-a[32135]: 2024-04-02T00:47:41.102+0000 7f4f6081c700 -1 log_channel(cluster) log [ERR] : Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)
2024-04-02T00:47:41.526 INFO:journalctl@ceph.mon.a.smithi110.stdout:Apr 02 00:47:41 smithi110 ceph-mon[32139]: Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)
2024-04-02T00:47:41.526 INFO:journalctl@ceph.mon.a.smithi110.stdout:Apr 02 00:47:41 smithi110 ceph-mon[32139]: mds.? [v2:172.21.15.150:6836/1118918453,v1:172.21.15.150:6837/1118918453] up:active
2024-04-02T00:47:41.526 INFO:journalctl@ceph.mon.a.smithi110.stdout:Apr 02 00:47:41 smithi110 ceph-mon[32139]: fsmap cephfs:5 5 up:standby-replay 2 up:standby 5 up:active
2024-04-02T00:47:42.070 INFO:tasks.workunit.client.0.smithi110.stdout: 1 35340 4.51 MB/sec warmup 46 sec latency 40.098 ms
I think this is the cause of the failure; I need to figure out why this could happen in the MDS.
Updated by Xiubo Li almost 2 years ago · Edited
2024-04-02T00:47:35.888+0000 7f405285e700 10 MDSIOContextBase::complete: 19MDSIOContextWrapper
2024-04-02T00:47:35.888+0000 7f405285e700 10 MDSContext::complete: 19MDSIOContextWrapper
2024-04-02T00:47:35.888+0000 7f405285e700 20 mds.3.cache.ino(0x1000000020d) ondisk_read_retval: 0
2024-04-02T00:47:35.888+0000 7f405285e700 10 mds.3.cache.ino(0x1000000020d) decoded 412 bytes of backtrace successfully
2024-04-02T00:47:35.888+0000 7f405285e700 10 mds.3.cache.ino(0x1000000020d) scrub: inotable ino = 0x1000000020d
2024-04-02T00:47:35.888+0000 7f405285e700 10 mds.3.cache.ino(0x1000000020d) scrub: inotable free says 0
2024-04-02T00:47:35.888+0000 7f405285e700 10 MDSContext::complete: 16C_InodeValidated
2024-04-02T00:47:35.888+0000 7f405285e700 0 log_channel(cluster) log [WRN] : Scrub error on inode 0x1000000020d (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/EXCEL/PCMAGCD.XLS) see mds.k log and `damage ls` output for details
2024-04-02T00:47:35.888+0000 7f405285e700 -1 mds.3.scrubstack _validate_inode_done scrub error on inode [inode 0x1000000020d [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/EXCEL/PCMAGCD.XLS auth v729 ap=1 s=260096 n(v0 rc2024-04-02T00:47:35.386273+0000 b260096 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={25201=0-4194304@3} caps={25201=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@52},l=25201 | ptrwaiter=0 request=0 lock=0 importingcaps=0 caps=1 replicated=0 dirty=1 waiter=0 authpin=1 scrubqueue=0 0x55984668f080]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x1000000020d:[<0x1000000020c/PCMAGCD.XLS v2>,<0x10000000009/EXCEL v3910>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x1000000020d:[<0x1000000020c/PCMAGCD.XLS v729>,<0x10000000009/EXCEL v4776>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
2024-04-02T00:47:35.888+0000 7f405285e700 20 mds.3.cache.ino(0x1000000020d) scrub_finished
2024-04-02T00:47:35.888+0000 7f405285e700 20 mds.3.scrubstack kick_off_scrubs: state=RUNNING
2024-04-02T00:47:35.888+0000 7f405285e700 20 mds.3.scrubstack kick_off_scrubs entering with 4 in progress and 3 in the stack
In the backtrace, one parent version on disk is newer than the one in the MDS cache:
On disk: <0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>
MDS cache: <0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>
The scrub then failed and reported an error.
Updated by Xiubo Li almost 2 years ago · Edited
There are more:
2024-04-02T00:47:36.041 INFO:teuthology.orchestra.run.smithi110.stderr:2024-04-02T00:47:36.029+0000 7f07669b6640 1 -- 172.21.15.110:0/1478999961 shutdown_connections
2024-04-02T00:47:36.041 INFO:teuthology.orchestra.run.smithi110.stderr:2024-04-02T00:47:36.029+0000 7f07669b6640 1 -- 172.21.15.110:0/1478999961 wait complete.
2024-04-02T00:47:36.041 INFO:tasks.cephfs.filesystem:scrub status for tag:fd80e962-24be-4041-840a-c361b90f5a7f - {'path': '/', 'tag': 'fd80e962-24be-4041-840a-c361b90f5a7f', 'options': 'recursive,force'}
2024-04-02T00:47:36.066 INFO:tasks.workunit.client.0.smithi110.stdout: 1 24989 3.84 MB/sec warmup 40 sec latency 12.089 ms
2024-04-02T00:47:36.173 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:36 smithi181 ceph-mon[36358]: pgmap v56: 97 pgs: 97 active+clean; 28 MiB data, 518 MiB used, 1.0 TiB / 1.0 TiB avail; 221 KiB/s rd, 321 KiB/s wr, 86 op/s
2024-04-02T00:47:36.174 INFO:journalctl@ceph.mds.i.smithi181.stdout:Apr 02 00:47:35 smithi181 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mds-i[67775]: 2024-04-02T00:47:35.890+0000 7f58b4f98700 -1 mds.1.scrubstack _validate_inode_done scrub error on inode [inode 0x10000000242 [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/SEED/MEDIUM.FIL auth v29 ap=1 s=1048576 n(v0 rc2024-04-02T00:47:20.808519+0000 b1048576 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={25201=0-4194304@3} caps={25201=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@5},l=25201 | ptrwaiter=0 request=0 lock=0 importingcaps=0 caps=1 replicated=0 dirty=1 authpin=1 scrubqueue=0 0x55fa9ea39b80]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x10000000242:[<0x10000000240/MEDIUM.FIL v4>,<0x10000000009/SEED v120>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x10000000242:[<0x10000000240/MEDIUM.FIL v29>,<0x10000000009/SEED v4546>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
2024-04-02T00:47:36.174 INFO:journalctl@ceph.mds.i.smithi181.stdout:Apr 02 00:47:35 smithi181 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mds-i[67775]: 2024-04-02T00:47:35.890+0000 7f58b4f98700 -1 mds.1.scrubstack _validate_inode_done scrub error on inode [inode 0x10000000243 [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/SEED/LARGE.FIL auth v31 ap=1 s=1867776 n(v0 rc2024-04-02T00:47:20.808519+0000 b1867776 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={25201=0-4194304@3} caps={25201=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@5},l=25201 | ptrwaiter=0 request=0 lock=0 importingcaps=0 caps=1 replicated=0 dirty=1 authpin=1 scrubqueue=0 0x55fa9ea3a100]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x10000000243:[<0x10000000240/LARGE.FIL v6>,<0x10000000009/SEED v120>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x10000000243:[<0x10000000240/LARGE.FIL v31>,<0x10000000009/SEED v4546>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
2024-04-02T00:47:36.174 INFO:journalctl@ceph.mds.i.smithi181.stdout:Apr 02 00:47:35 smithi181 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mds-i[67775]: 2024-04-02T00:47:35.890+0000 7f58b4f98700 -1 mds.1.scrubstack _validate_inode_done scrub error on inode [inode 0x10000000241 [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/SEED/SMALL.FIL auth v27 ap=1 s=327680 n(v0 rc2024-04-02T00:47:20.808519+0000 b327680 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={25201=0-4194304@3} caps={25201=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@5},l=25201 | ptrwaiter=0 request=0 lock=0 importingcaps=0 caps=1 replicated=0 dirty=1 authpin=1 scrubqueue=0 0x55fa9ea39600]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x10000000241:[<0x10000000240/SMALL.FIL v2>,<0x10000000009/SEED v120>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x10000000241:[<0x10000000240/SMALL.FIL v27>,<0x10000000009/SEED v4546>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
2024-04-02T00:47:36.233 INFO:journalctl@ceph.mon.b.smithi150.stdout:Apr 02 00:47:36 smithi150 ceph-mon[36451]: pgmap v56: 97 pgs: 97 active+clean; 28 MiB data, 518 MiB used, 1.0 TiB / 1.0 TiB avail; 221 KiB/s rd, 321 KiB/s wr, 86 op/s
2024-04-02T00:47:36.233 INFO:journalctl@ceph.mds.h.smithi150.stdout:Apr 02 00:47:35 smithi150 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mds-h[66731]: 2024-04-02T00:47:35.888+0000 7f9654430700 -1 mds.2.scrubstack _validate_inode_done scrub error on inode [inode 0x1000000020a [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORD/BASEMACH.DOC auth v402 ap=1 s=260096 n(v0 rc2024-04-02T00:46:56.115936+0000 b260096 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={25201=0-4194304@3} caps={25201=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@35},l=25201 | ptrwaiter=0 lock=0 importingcaps=0 caps=1 replicated=0 dirty=1 authpin=1 scrubqueue=0 0x563b4718f180]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x1000000020a:[<0x10000000208/BASEMACH.DOC v4>,<0x10000000009/WORD v3886>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x1000000020a:[<0x10000000208/BASEMACH.DOC v402>,<0x10000000009/WORD v4594>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
2024-04-02T00:47:36.233 INFO:journalctl@ceph.mds.h.smithi150.stdout:Apr 02 00:47:35 smithi150 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mds-h[66731]: 2024-04-02T00:47:35.888+0000 7f9654430700 -1 mds.2.scrubstack _validate_inode_done scrub error on inode [inode 0x1000000020b [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORD/FACTS.DOC auth v417 ap=1 s=260096 n(v0 rc2024-04-02T00:46:56.121936+0000 b260096 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={25201=0-4194304@3} caps={25201=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@35},l=25201 | ptrwaiter=0 lock=0 importingcaps=0 caps=1 replicated=0 dirty=1 authpin=1 scrubqueue=0 0x563b471ee580]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x1000000020b:[<0x10000000208/FACTS.DOC v6>,<0x10000000009/WORD v3886>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x1000000020b:[<0x10000000208/FACTS.DOC v417>,<0x10000000009/WORD v4594>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
2024-04-02T00:47:36.234 INFO:journalctl@ceph.mds.k.smithi150.stdout:Apr 02 00:47:35 smithi150 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mds-k[66111]: 2024-04-02T00:47:35.888+0000 7f405285e700 -1 mds.3.scrubstack _validate_inode_done scrub error on inode [inode 0x1000000020d [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/EXCEL/PCMAGCD.XLS auth v729 ap=1 s=260096 n(v0 rc2024-04-02T00:47:35.386273+0000 b260096 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={25201=0-4194304@3} caps={25201=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@52},l=25201 | ptrwaiter=0 request=0 lock=0 importingcaps=0 caps=1 replicated=0 dirty=1 waiter=0 authpin=1 scrubqueue=0 0x55984668f080]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x1000000020d:[<0x1000000020c/PCMAGCD.XLS v2>,<0x10000000009/EXCEL v3910>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x1000000020d:[<0x1000000020c/PCMAGCD.XLS v729>,<0x10000000009/EXCEL v4776>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
2024-04-02T00:47:36.234 INFO:journalctl@ceph.mds.k.smithi150.stdout:Apr 02 00:47:35 smithi150 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mds-k[66111]: 2024-04-02T00:47:35.888+0000 7f405285e700 -1 mds.3.scrubstack _validate_inode_done scrub error on inode [inode 0x1000000020e [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/EXCEL/SALES.XLS auth v814 ap=1 s=260096 n(v0 rc2024-04-02T00:47:35.560270+0000 b260096 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={25201=0-4194304@3} caps={25201=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@52},l=25201 | ptrwaiter=0 request=0 lock=0 importingcaps=0 caps=1 replicated=0 dirty=1 waiter=0 authpin=1 scrubqueue=0 0x55984668f600]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x1000000020e:[<0x1000000020c/SALES.XLS v4>,<0x10000000009/EXCEL v3910>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x1000000020e:[<0x1000000020c/SALES.XLS v814>,<0x10000000009/EXCEL v4776>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
2024-04-02T00:47:36.234 INFO:journalctl@ceph.mds.h.smithi150.stdout:Apr 02 00:47:35 smithi150 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mds-h[66731]: 2024-04-02T00:47:35.890+0000 7f9654430700 -1 mds.2.scrubstack _validate_inode_done scrub error on inode [inode 0x10000000209 [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORD/CHAP10.DOC auth v380 ap=1 s=260096 n(v0 rc2024-04-02T00:46:56.108936+0000 b260096 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={25201=0-4194304@3} caps={25201=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@35},l=25201 | ptrwaiter=0 lock=0 importingcaps=0 caps=1 replicated=0 dirty=1 authpin=1 scrubqueue=0 0x563b471eeb00]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x10000000209:[<0x10000000208/CHAP10.DOC v2>,<0x10000000009/WORD v3886>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x10000000209:[<0x10000000208/CHAP10.DOC v380>,<0x10000000009/WORD v4594>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
2024-04-02T00:47:36.234 INFO:journalctl@ceph.mds.h.smithi150.stdout:Apr 02 00:47:35 smithi150 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mds-h[66731]: 2024-04-02T00:47:35.895+0000 7f9654430700 -1 mds.2.scrubstack _validate_inode_done scrub error on inode [inode 0x10000000226 [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/PM/T1.XLS auth v960 ap=1 s=260096 n(v0 rc2024-04-02T00:46:56.230934+0000 b260096 1=1+0) | ptrwaiter=0 lock=0 importingcaps=0 caps=0 replicated=0 dirty=1 authpin=1 scrubqueue=0 0x563b471ef080]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x10000000226:[<0x10000000225/T1.XLS v2>,<0x10000000009/PM v3958>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x10000000226:[<0x10000000225/T1.XLS v960>,<0x10000000009/PM v4509>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
2024-04-02T00:47:36.234 INFO:journalctl@ceph.mds.k.smithi150.stdout:Apr 02 00:47:35 smithi150 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mds-k[66111]: 2024-04-02T00:47:35.892+0000 7f405285e700 -1 mds.3.scrubstack _validate_inode_done scrub error on inode [inode 0x10000000222 [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORDPRO/BENCHS.LWP auth v494 ap=1 s=260096 n(v0 rc2024-04-02T00:46:56.215935+0000 b260096 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={25201=0-4194304@3} caps={25201=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@52},l=25201 | ptrwaiter=0 request=0 lock=0 importingcaps=0 caps=1 replicated=0 dirty=1 authpin=1 scrubqueue=0 0x5598464a6c00]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x10000000222:[<0x10000000221/BENCHS.LWP v2>,<0x10000000009/WORDPRO v3954>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x10000000222:[<0x10000000221/BENCHS.LWP v494>,<0x10000000009/WORDPRO v4818>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
2024-04-02T00:47:36.235 INFO:journalctl@ceph.mds.k.smithi150.stdout:Apr 02 00:47:35 smithi150 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mds-k[66111]: 2024-04-02T00:47:35.892+0000 7f405285e700 -1 mds.3.scrubstack _validate_inode_done scrub error on inode [inode 0x10000000223 [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORDPRO/RESULTS.XLS auth v637 ap=1 s=260096 n(v0 rc2024-04-02T00:46:56.221934+0000 b260096 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={25201=0-4194304@3} caps={25201=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@28},l=25201 | ptrwaiter=0 request=0 lock=0 importingcaps=0 caps=1 replicated=0 dirty=1 waiter=0 authpin=1 scrubqueue=0 0x55984668e580]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x10000000223:[<0x10000000221/RESULTS.XLS v4>,<0x10000000009/WORDPRO v3954>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x10000000223:[<0x10000000221/RESULTS.XLS v637>,<0x10000000009/WORDPRO v4818>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
2024-04-02T00:47:36.235 INFO:journalctl@ceph.mds.k.smithi150.stdout:Apr 02 00:47:35 smithi150 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mds-k[66111]: 2024-04-02T00:47:35.892+0000 7f405285e700 -1 mds.3.scrubstack _validate_inode_done scrub error on inode [inode 0x10000000224 [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORDPRO/NEWS1_1.LWP auth v541 ap=1 s=260096 n(v0 rc2024-04-02T00:46:56.227934+0000 b260096 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={25201=0-4194304@3} caps={25201=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@51},l=25201 | ptrwaiter=0 lock=0 importingcaps=0 caps=1 replicated=0 dirty=1 authpin=1 scrubqueue=0 0x55984668eb00]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x10000000224:[<0x10000000221/NEWS1_1.LWP v6>,<0x10000000009/WORDPRO v3954>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x10000000224:[<0x10000000221/NEWS1_1.LWP v541>,<0x10000000009/WORDPRO v4818>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
2024-04-02T00:47:36.235 INFO:journalctl@ceph.mds.h.smithi150.stdout:Apr 02 00:47:35 smithi150 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mds-h[66731]: 2024-04-02T00:47:35.895+0000 7f9654430700 -1 mds.2.scrubstack _validate_inode_done scrub error on inode [inode 0x10000000227 [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/PM/EVENTS.DOC auth v952 ap=1 s=260096 n(v0 rc2024-04-02T00:46:56.234934+0000 b260096 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={25201=0-4194304@3} caps={25201=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@98},l=25201 | ptrwaiter=0 lock=0 importingcaps=0 caps=1 replicated=0 dirty=1 authpin=1 scrubqueue=0 0x563b471ef600]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x10000000227:[<0x10000000225/EVENTS.DOC v4>,<0x10000000009/PM v3958>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x10000000227:[<0x10000000225/EVENTS.DOC v952>,<0x10000000009/PM v4509>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
2024-04-02T00:47:36.235 INFO:journalctl@ceph.mds.h.smithi150.stdout:Apr 02 00:47:35 smithi150 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mds-h[66731]: 2024-04-02T00:47:35.896+0000 7f9654430700 -1 mds.2.scrubstack _validate_inode_done scrub error on inode [inode 0x10000000228 [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/PM/MOVED.DOC auth v954 ap=1 s=260096 n(v0 rc2024-04-02T00:46:56.239934+0000 b260096 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={25201=0-4194304@3} caps={25201=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@98},l=25201 | ptrwaiter=0 lock=0 importingcaps=0 caps=1 replicated=0 dirty=1 authpin=1 scrubqueue=0 0x563b471efb80]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x10000000228:[<0x10000000225/MOVED.DOC v6>,<0x10000000009/PM v3958>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x10000000228:[<0x10000000225/MOVED.DOC v954>,<0x10000000009/PM v4509>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
2024-04-02T00:47:36.235 INFO:journalctl@ceph.mds.h.smithi150.stdout:Apr 02 00:47:35 smithi150 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mds-h[66731]: 2024-04-02T00:47:35.904+0000 7f9654430700 -1 mds.2.scrubstack _validate_inode_done scrub error on inode [inode 0x1000000023f [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/ACCESS/FASTENER.MDB auth v283 ap=1 s=260096 n(v0 rc2024-04-02T00:47:35.876265+0000 b260096 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={25201=0-4194304@3} caps={25201=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@13},l=25201 | ptrwaiter=0 lock=0 importingcaps=0 caps=1 replicated=0 dirty=1 authpin=1 scrubqueue=0 0x563b471f0680]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x1000000023f:[<0x1000000023e/FASTENER.MDB v176>,<0x10000000009/ACCESS v3986>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x1000000023f:[<0x1000000023e/FASTENER.MDB v283>,<0x10000000009/ACCESS v4822>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
2024-04-02T00:47:36.526 INFO:journalctl@ceph.mon.a.smithi110.stdout:Apr 02 00:47:36 smithi110 ceph-mon[32139]: pgmap v56: 97 pgs: 97 active+clean; 28 MiB data, 518 MiB used, 1.0 TiB / 1.0 TiB avail; 221 KiB/s rd, 321 KiB/s wr, 86 op/s
2024-04-02T00:47:37.066 INFO:tasks.workunit.client.0.smithi110.stdout: 1 26095 3.83 MB/sec warmup 41 sec latency 12.691 ms
2024-04-02T00:47:37.424 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: scrub queued for path: /
2024-04-02T00:47:37.424 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: scrub summary: idle+waiting paths [/]
2024-04-02T00:47:37.424 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: scrub summary: active paths [/]
2024-04-02T00:47:37.424 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: scrub summary: active paths [/]
2024-04-02T00:47:37.425 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: scrub summary: active paths [/]
2024-04-02T00:47:37.425 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: scrub summary: active paths [/]
2024-04-02T00:47:37.425 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: Scrub error on inode 0x1000000020d (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/EXCEL/PCMAGCD.XLS) see mds.k log and `damage ls` output for details
2024-04-02T00:47:37.425 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: Scrub error on inode 0x1000000020a (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORD/BASEMACH.DOC) see mds.h log and `damage ls` output for details
2024-04-02T00:47:37.425 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: Scrub error on inode 0x1000000020e (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/EXCEL/SALES.XLS) see mds.k log and `damage ls` output for details
2024-04-02T00:47:37.425 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: Scrub error on inode 0x1000000020b (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORD/FACTS.DOC) see mds.h log and `damage ls` output for details
2024-04-02T00:47:37.425 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: Scrub error on inode 0x10000000242 (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/SEED/MEDIUM.FIL) see mds.i log and `damage ls` output for details
2024-04-02T00:47:37.425 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: Scrub error on inode 0x10000000243 (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/SEED/LARGE.FIL) see mds.i log and `damage ls` output for details
2024-04-02T00:47:37.425 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: Scrub error on inode 0x10000000241 (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/SEED/SMALL.FIL) see mds.i log and `damage ls` output for details
2024-04-02T00:47:37.425 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: scrub summary: idle+waiting paths [/]
2024-04-02T00:47:37.425 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: Scrub error on inode 0x10000000209 (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORD/CHAP10.DOC) see mds.h log and `damage ls` output for details
2024-04-02T00:47:37.425 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: scrub summary: idle+waiting paths [/]
2024-04-02T00:47:37.425 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: Scrub error on inode 0x10000000222 (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORDPRO/BENCHS.LWP) see mds.k log and `damage ls` output for details
2024-04-02T00:47:37.425 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: Scrub error on inode 0x10000000223 (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORDPRO/RESULTS.XLS) see mds.k log and `damage ls` output for details
2024-04-02T00:47:37.426 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: Scrub error on inode 0x10000000224 (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORDPRO/NEWS1_1.LWP) see mds.k log and `damage ls` output for details
2024-04-02T00:47:37.426 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: scrub summary: idle+waiting paths [/]
2024-04-02T00:47:37.426 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: scrub summary: active paths [/]
2024-04-02T00:47:37.426 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: Scrub error on inode 0x10000000226 (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/PM/T1.XLS) see mds.h log and `damage ls` output for details
2024-04-02T00:47:37.426 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: Scrub error on inode 0x10000000227 (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/PM/EVENTS.DOC) see mds.h log and `damage ls` output for details
2024-04-02T00:47:37.426 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: Scrub error on inode 0x10000000228 (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/PM/MOVED.DOC) see mds.h log and `damage ls` output for details
2024-04-02T00:47:37.426 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: scrub summary: idle+waiting paths [/]
2024-04-02T00:47:37.426 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: scrub summary: active paths [/]
2024-04-02T00:47:37.426 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: scrub summary: idle+waiting paths [/]
2024-04-02T00:47:37.426 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: scrub summary: active paths [/]
2024-04-02T00:47:37.426 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: Scrub error on inode 0x1000000023f (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/ACCESS/FASTENER.MDB) see mds.h log and `damage ls` output for details
2024-04-02T00:47:37.426 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: scrub summary: idle+waiting paths [/]
2024-04-02T00:47:37.426 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:37 smithi181 ceph-mon[36358]: scrub summary: idle+waiting paths [/]
...
2024-04-02T00:47:38.066 INFO:tasks.workunit.client.0.smithi110.stdout: 1 28556 4.11 MB/sec warmup 42 sec latency 17.284 ms
2024-04-02T00:47:38.384 INFO:journalctl@ceph.mon.b.smithi150.stdout:Apr 02 00:47:38 smithi150 ceph-mon[36451]: pgmap v57: 97 pgs: 97 active+clean; 28 MiB data, 518 MiB used, 1.0 TiB / 1.0 TiB avail; 74 KiB/s rd, 182 KiB/s wr, 37 op/s
2024-04-02T00:47:38.423 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:38 smithi181 ceph-mon[36358]: pgmap v57: 97 pgs: 97 active+clean; 28 MiB data, 518 MiB used, 1.0 TiB / 1.0 TiB avail; 74 KiB/s rd, 182 KiB/s wr, 37 op/s
2024-04-02T00:47:38.526 INFO:journalctl@ceph.mon.a.smithi110.stdout:Apr 02 00:47:38 smithi110 ceph-mon[32139]: pgmap v57: 97 pgs: 97 active+clean; 28 MiB data, 518 MiB used, 1.0 TiB / 1.0 TiB avail; 74 KiB/s rd, 182 KiB/s wr, 37 op/s
2024-04-02T00:47:39.066 INFO:tasks.workunit.client.0.smithi110.stdout: 1 29580 4.09 MB/sec warmup 43 sec latency 11.387 ms
2024-04-02T00:47:40.066 INFO:tasks.workunit.client.0.smithi110.stdout: 1 31945 4.35 MB/sec warmup 44 sec latency 15.675 ms
2024-04-02T00:47:40.423 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:40 smithi181 ceph-mon[36358]: pgmap v58: 97 pgs: 97 active+clean; 32 MiB data, 533 MiB used, 1.0 TiB / 1.0 TiB avail; 659 KiB/s rd, 566 KiB/s wr, 118 op/s
2024-04-02T00:47:40.482 INFO:journalctl@ceph.mon.b.smithi150.stdout:Apr 02 00:47:40 smithi150 ceph-mon[36451]: pgmap v58: 97 pgs: 97 active+clean; 32 MiB data, 533 MiB used, 1.0 TiB / 1.0 TiB avail; 659 KiB/s rd, 566 KiB/s wr, 118 op/s
2024-04-02T00:47:40.526 INFO:journalctl@ceph.mon.a.smithi110.stdout:Apr 02 00:47:40 smithi110 ceph-mon[32139]: pgmap v58: 97 pgs: 97 active+clean; 32 MiB data, 533 MiB used, 1.0 TiB / 1.0 TiB avail; 659 KiB/s rd, 566 KiB/s wr, 118 op/s
2024-04-02T00:47:41.066 INFO:tasks.workunit.client.0.smithi110.stdout: 1 33057 4.33 MB/sec warmup 45 sec latency 13.766 ms
2024-04-02T00:47:41.423 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:41 smithi181 ceph-mon[36358]: Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)
2024-04-02T00:47:41.423 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:41 smithi181 ceph-mon[36358]: mds.? [v2:172.21.15.150:6836/1118918453,v1:172.21.15.150:6837/1118918453] up:active
2024-04-02T00:47:41.424 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:41 smithi181 ceph-mon[36358]: fsmap cephfs:5 5 up:standby-replay 2 up:standby 5 up:active
2024-04-02T00:47:41.482 INFO:journalctl@ceph.mon.b.smithi150.stdout:Apr 02 00:47:41 smithi150 ceph-mon[36451]: Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)
2024-04-02T00:47:41.483 INFO:journalctl@ceph.mon.b.smithi150.stdout:Apr 02 00:47:41 smithi150 ceph-mon[36451]: mds.? [v2:172.21.15.150:6836/1118918453,v1:172.21.15.150:6837/1118918453] up:active
2024-04-02T00:47:41.483 INFO:journalctl@ceph.mon.b.smithi150.stdout:Apr 02 00:47:41 smithi150 ceph-mon[36451]: fsmap cephfs:5 5 up:standby-replay 2 up:standby 5 up:active
2024-04-02T00:47:41.526 INFO:journalctl@ceph.mon.a.smithi110.stdout:Apr 02 00:47:41 smithi110 ceph-cf707428-f088-11ee-b647-cb9ed24678a4-mon-a[32135]: 2024-04-02T00:47:41.102+0000 7f4f6081c700 -1 log_channel(cluster) log [ERR] : Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)
2024-04-02T00:47:41.526 INFO:journalctl@ceph.mon.a.smithi110.stdout:Apr 02 00:47:41 smithi110 ceph-mon[32139]: Health check failed: 1 MDSs report damaged metadata (MDS_DAMAGE)
2024-04-02T00:47:41.526 INFO:journalctl@ceph.mon.a.smithi110.stdout:Apr 02 00:47:41 smithi110 ceph-mon[32139]: mds.? [v2:172.21.15.150:6836/1118918453,v1:172.21.15.150:6837/1118918453] up:active
2024-04-02T00:47:41.526 INFO:journalctl@ceph.mon.a.smithi110.stdout:Apr 02 00:47:41 smithi110 ceph-mon[32139]: fsmap cephfs:5 5 up:standby-replay 2 up:standby 5 up:active
2024-04-02T00:47:42.070 INFO:tasks.workunit.client.0.smithi110.stdout: 1 35340 4.51 MB/sec warmup 46 sec latency 40.098 ms
2024-04-02T00:47:42.423 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:42 smithi181 ceph-mon[36358]: pgmap v59: 97 pgs: 97 active+clean; 32 MiB data, 533 MiB used, 1.0 TiB / 1.0 TiB avail; 641 KiB/s rd, 465 KiB/s wr, 107 op/s
2024-04-02T00:47:42.482 INFO:journalctl@ceph.mon.b.smithi150.stdout:Apr 02 00:47:42 smithi150 ceph-mon[36451]: pgmap v59: 97 pgs: 97 active+clean; 32 MiB data, 533 MiB used, 1.0 TiB / 1.0 TiB avail; 641 KiB/s rd, 465 KiB/s wr, 107 op/s
2024-04-02T00:47:42.526 INFO:journalctl@ceph.mon.a.smithi110.stdout:Apr 02 00:47:42 smithi110 ceph-mon[32139]: pgmap v59: 97 pgs: 97 active+clean; 32 MiB data, 533 MiB used, 1.0 TiB / 1.0 TiB avail; 641 KiB/s rd, 465 KiB/s wr, 107 op/s
2024-04-02T00:47:43.066 INFO:tasks.workunit.client.0.smithi110.stdout: 1 36288 4.52 MB/sec warmup 47 sec latency 71.236 ms
2024-04-02T00:47:43.423 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:43 smithi181 ceph-mon[36358]: mds.? [v2:172.21.15.150:6832/3824112962,v1:172.21.15.150:6833/3824112962] up:active
2024-04-02T00:47:43.436 INFO:journalctl@ceph.mon.c.smithi181.stdout:Apr 02 00:47:43 smithi181 ceph-mon[36358]: fsmap cephfs:5 5 up:standby-replay 2 up:standby 5 up:active
2024-04-02T00:47:43.482 INFO:journalctl@ceph.mon.b.smithi150.stdout:Apr 02 00:47:43 smithi150 ceph-mon[36451]: mds.? [v2:172.21.15.150:6832/3824112962,v1:172.21.15.150:6833/3824112962] up:active
2024-04-02T00:47:43.483 INFO:journalctl@ceph.mon.b.smithi150.stdout:Apr 02 00:47:43 smithi150 ceph-mon[36451]: fsmap cephfs:5 5 up:standby-replay 2 up:standby 5 up:active
2024-04-02T00:47:43.526 INFO:journalctl@ceph.mon.a.smithi110.stdout:Apr 02 00:47:43 smithi110 ceph-mon[32139]: mds.? [v2:172.21.15.150:6832/3824112962,v1:172.21.15.150:6833/3824112962] up:active
2024-04-02T00:47:43.526 INFO:journalctl@ceph.mon.a.smithi110.stdout:Apr 02 00:47:43 smithi110 ceph-mon[32139]: fsmap cephfs:5 5 up:standby-replay 2 up:standby 5 up:active
2024-04-02T00:47:44.066 INFO:tasks.workunit.client.0.smithi110.stdout: 1 38488 4.68 MB/sec warmup 48 sec latency 16.134 ms
...
2024-04-02T00:48:06.865 INFO:teuthology.orchestra.run.smithi110.stderr:2024-04-02T00:48:06.853+0000 7f968102f740 1 -- 172.21.15.110:0/2904592422 shutdown_connections
2024-04-02T00:48:06.865 INFO:teuthology.orchestra.run.smithi110.stderr:2024-04-02T00:48:06.853+0000 7f968102f740 1 -- 172.21.15.110:0/2904592422 wait complete.
2024-04-02T00:48:06.865 INFO:teuthology.orchestra.run.smithi110.stderr:2024-04-02T00:48:06.853+0000 7f968102f740 20 client.16278 trim_cache size 0 max 16384
2024-04-02T00:48:06.865 INFO:teuthology.orchestra.run.smithi110.stdout:[
2024-04-02T00:48:06.865 INFO:teuthology.orchestra.run.smithi110.stdout: {
2024-04-02T00:48:06.865 INFO:teuthology.orchestra.run.smithi110.stdout: "damage_type": "backtrace",
2024-04-02T00:48:06.865 INFO:teuthology.orchestra.run.smithi110.stdout: "id": 327454483,
2024-04-02T00:48:06.865 INFO:teuthology.orchestra.run.smithi110.stdout: "ino": 1099511628323,
2024-04-02T00:48:06.865 INFO:teuthology.orchestra.run.smithi110.stdout: "path": "/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORDPRO/RESULTS.XLS"
2024-04-02T00:48:06.866 INFO:teuthology.orchestra.run.smithi110.stdout: },
2024-04-02T00:48:06.866 INFO:teuthology.orchestra.run.smithi110.stdout: {
2024-04-02T00:48:06.866 INFO:teuthology.orchestra.run.smithi110.stdout: "damage_type": "backtrace",
2024-04-02T00:48:06.866 INFO:teuthology.orchestra.run.smithi110.stdout: "id": 950104272,
2024-04-02T00:48:06.866 INFO:teuthology.orchestra.run.smithi110.stdout: "ino": 1099511628322,
2024-04-02T00:48:06.866 INFO:teuthology.orchestra.run.smithi110.stdout: "path": "/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORDPRO/BENCHS.LWP"
2024-04-02T00:48:06.866 INFO:teuthology.orchestra.run.smithi110.stdout: },
2024-04-02T00:48:06.866 INFO:teuthology.orchestra.run.smithi110.stdout: {
2024-04-02T00:48:06.866 INFO:teuthology.orchestra.run.smithi110.stdout: "damage_type": "backtrace",
2024-04-02T00:48:06.866 INFO:teuthology.orchestra.run.smithi110.stdout: "id": 2175666956,
2024-04-02T00:48:06.866 INFO:teuthology.orchestra.run.smithi110.stdout: "ino": 1099511628301,
2024-04-02T00:48:06.866 INFO:teuthology.orchestra.run.smithi110.stdout: "path": "/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/EXCEL/PCMAGCD.XLS"
2024-04-02T00:48:06.866 INFO:teuthology.orchestra.run.smithi110.stdout: },
2024-04-02T00:48:06.867 INFO:teuthology.orchestra.run.smithi110.stdout: {
2024-04-02T00:48:06.867 INFO:teuthology.orchestra.run.smithi110.stdout: "damage_type": "backtrace",
2024-04-02T00:48:06.867 INFO:teuthology.orchestra.run.smithi110.stdout: "id": 2988204094,
2024-04-02T00:48:06.867 INFO:teuthology.orchestra.run.smithi110.stdout: "ino": 1099511628302,
2024-04-02T00:48:06.867 INFO:teuthology.orchestra.run.smithi110.stdout: "path": "/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/EXCEL/SALES.XLS"
2024-04-02T00:48:06.867 INFO:teuthology.orchestra.run.smithi110.stdout: },
2024-04-02T00:48:06.867 INFO:teuthology.orchestra.run.smithi110.stdout: {
2024-04-02T00:48:06.867 INFO:teuthology.orchestra.run.smithi110.stdout: "damage_type": "backtrace",
2024-04-02T00:48:06.867 INFO:teuthology.orchestra.run.smithi110.stdout: "id": 3367085732,
2024-04-02T00:48:06.867 INFO:teuthology.orchestra.run.smithi110.stdout: "ino": 1099511628324,
2024-04-02T00:48:06.867 INFO:teuthology.orchestra.run.smithi110.stdout: "path": "/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORDPRO/NEWS1_1.LWP"
2024-04-02T00:48:06.867 INFO:teuthology.orchestra.run.smithi110.stdout: }
2024-04-02T00:48:06.868 INFO:teuthology.orchestra.run.smithi110.stdout:]
...
2024-04-02T00:48:07.010 INFO:teuthology.orchestra.run.smithi110.stderr:2024-04-02T00:48:07.000+0000 7f59569f3740 20 client.16296 trim_cache size 0 max 16384
2024-04-02T00:48:07.010 INFO:teuthology.orchestra.run.smithi110.stdout:[
2024-04-02T00:48:07.011 INFO:teuthology.orchestra.run.smithi110.stdout: {
2024-04-02T00:48:07.011 INFO:teuthology.orchestra.run.smithi110.stdout: "damage_type": "backtrace",
2024-04-02T00:48:07.011 INFO:teuthology.orchestra.run.smithi110.stdout: "id": 66189433,
2024-04-02T00:48:07.011 INFO:teuthology.orchestra.run.smithi110.stdout: "ino": 1099511628298,
2024-04-02T00:48:07.011 INFO:teuthology.orchestra.run.smithi110.stdout: "path": "/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORD/BASEMACH.DOC"
2024-04-02T00:48:07.011 INFO:teuthology.orchestra.run.smithi110.stdout: },
2024-04-02T00:48:07.011 INFO:teuthology.orchestra.run.smithi110.stdout: {
2024-04-02T00:48:07.011 INFO:teuthology.orchestra.run.smithi110.stdout: "damage_type": "backtrace",
2024-04-02T00:48:07.011 INFO:teuthology.orchestra.run.smithi110.stdout: "id": 1603758148,
2024-04-02T00:48:07.011 INFO:teuthology.orchestra.run.smithi110.stdout: "ino": 1099511628299,
2024-04-02T00:48:07.011 INFO:teuthology.orchestra.run.smithi110.stdout: "path": "/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORD/FACTS.DOC"
2024-04-02T00:48:07.011 INFO:teuthology.orchestra.run.smithi110.stdout: },
2024-04-02T00:48:07.011 INFO:teuthology.orchestra.run.smithi110.stdout: {
2024-04-02T00:48:07.012 INFO:teuthology.orchestra.run.smithi110.stdout: "damage_type": "backtrace",
2024-04-02T00:48:07.012 INFO:teuthology.orchestra.run.smithi110.stdout: "id": 2946528560,
2024-04-02T00:48:07.012 INFO:teuthology.orchestra.run.smithi110.stdout: "ino": 1099511628326,
2024-04-02T00:48:07.012 INFO:teuthology.orchestra.run.smithi110.stdout: "path": "/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/PM/T1.XLS"
2024-04-02T00:48:07.012 INFO:teuthology.orchestra.run.smithi110.stdout: },
2024-04-02T00:48:07.012 INFO:teuthology.orchestra.run.smithi110.stdout: {
2024-04-02T00:48:07.012 INFO:teuthology.orchestra.run.smithi110.stdout: "damage_type": "backtrace",
2024-04-02T00:48:07.012 INFO:teuthology.orchestra.run.smithi110.stdout: "id": 3513365793,
2024-04-02T00:48:07.012 INFO:teuthology.orchestra.run.smithi110.stdout: "ino": 1099511628297,
2024-04-02T00:48:07.012 INFO:teuthology.orchestra.run.smithi110.stdout: "path": "/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/WORD/CHAP10.DOC"
2024-04-02T00:48:07.012 INFO:teuthology.orchestra.run.smithi110.stdout: },
2024-04-02T00:48:07.012 INFO:teuthology.orchestra.run.smithi110.stdout: {
2024-04-02T00:48:07.012 INFO:teuthology.orchestra.run.smithi110.stdout: "damage_type": "backtrace",
2024-04-02T00:48:07.013 INFO:teuthology.orchestra.run.smithi110.stdout: "id": 3602428130,
2024-04-02T00:48:07.013 INFO:teuthology.orchestra.run.smithi110.stdout: "ino": 1099511628328,
2024-04-02T00:48:07.013 INFO:teuthology.orchestra.run.smithi110.stdout: "path": "/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/PM/MOVED.DOC"
2024-04-02T00:48:07.013 INFO:teuthology.orchestra.run.smithi110.stdout: },
2024-04-02T00:48:07.013 INFO:teuthology.orchestra.run.smithi110.stdout: {
2024-04-02T00:48:07.013 INFO:teuthology.orchestra.run.smithi110.stdout: "damage_type": "backtrace",
2024-04-02T00:48:07.013 INFO:teuthology.orchestra.run.smithi110.stdout: "id": 3749816280,
2024-04-02T00:48:07.013 INFO:teuthology.orchestra.run.smithi110.stdout: "ino": 1099511628327,
2024-04-02T00:48:07.013 INFO:teuthology.orchestra.run.smithi110.stdout: "path": "/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/PM/EVENTS.DOC"
2024-04-02T00:48:07.013 INFO:teuthology.orchestra.run.smithi110.stdout: },
2024-04-02T00:48:07.013 INFO:teuthology.orchestra.run.smithi110.stdout: {
2024-04-02T00:48:07.013 INFO:teuthology.orchestra.run.smithi110.stdout: "damage_type": "backtrace",
2024-04-02T00:48:07.014 INFO:teuthology.orchestra.run.smithi110.stdout: "id": 4266072862,
2024-04-02T00:48:07.014 INFO:teuthology.orchestra.run.smithi110.stdout: "ino": 1099511628351,
2024-04-02T00:48:07.014 INFO:teuthology.orchestra.run.smithi110.stdout: "path": "/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/ACCESS/FASTENER.MDB"
2024-04-02T00:48:07.014 INFO:teuthology.orchestra.run.smithi110.stdout: }
2024-04-02T00:48:07.014 INFO:teuthology.orchestra.run.smithi110.stdout:]
...
2024-04-02T00:48:07.160 INFO:teuthology.orchestra.run.smithi110.stdout:[
2024-04-02T00:48:07.160 INFO:teuthology.orchestra.run.smithi110.stdout: {
2024-04-02T00:48:07.160 INFO:teuthology.orchestra.run.smithi110.stdout: "damage_type": "backtrace",
2024-04-02T00:48:07.160 INFO:teuthology.orchestra.run.smithi110.stdout: "id": 619542657,
2024-04-02T00:48:07.160 INFO:teuthology.orchestra.run.smithi110.stdout: "ino": 1099511628353,
2024-04-02T00:48:07.160 INFO:teuthology.orchestra.run.smithi110.stdout: "path": "/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/SEED/SMALL.FIL"
2024-04-02T00:48:07.160 INFO:teuthology.orchestra.run.smithi110.stdout: },
2024-04-02T00:48:07.161 INFO:teuthology.orchestra.run.smithi110.stdout: {
2024-04-02T00:48:07.161 INFO:teuthology.orchestra.run.smithi110.stdout: "damage_type": "backtrace",
2024-04-02T00:48:07.161 INFO:teuthology.orchestra.run.smithi110.stdout: "id": 876871000,
2024-04-02T00:48:07.161 INFO:teuthology.orchestra.run.smithi110.stdout: "ino": 1099511628355,
2024-04-02T00:48:07.161 INFO:teuthology.orchestra.run.smithi110.stdout: "path": "/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/SEED/LARGE.FIL"
2024-04-02T00:48:07.161 INFO:teuthology.orchestra.run.smithi110.stdout: },
2024-04-02T00:48:07.161 INFO:teuthology.orchestra.run.smithi110.stdout: {
2024-04-02T00:48:07.161 INFO:teuthology.orchestra.run.smithi110.stdout: "damage_type": "backtrace",
2024-04-02T00:48:07.161 INFO:teuthology.orchestra.run.smithi110.stdout: "id": 3343540341,
2024-04-02T00:48:07.161 INFO:teuthology.orchestra.run.smithi110.stdout: "ino": 1099511628354,
2024-04-02T00:48:07.161 INFO:teuthology.orchestra.run.smithi110.stdout: "path": "/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/SEED/MEDIUM.FIL"
2024-04-02T00:48:07.161 INFO:teuthology.orchestra.run.smithi110.stdout: }
2024-04-02T00:48:07.161 INFO:teuthology.orchestra.run.smithi110.stdout:]
Updated by Xiubo Li almost 2 years ago · Edited
For example, for inode 0x100000001fb, which is /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/:
The root cause of the scrub error is that when the auth MDS (mds.b) committed the v40 dirty metadata to the metadata pool, it did not notify the replicas of the change:
2024-04-02T00:47:05.087+0000 7fcbd6f51700 20 mds.0.cache unwritable parent nestlock (inest mix->lock(2) w=1 flushing), marking dirty rstat on [inode 0x100000001fb [...4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/ auth{1=1,2=1,3=1,4=1} v40 pv43 ap=1 f(v0 m2024-04-02T00:45:59.436893+0000 1=0+1)->f(v0 m2024-04-02T00:45:59.436893+0000 1=0+1) n(v0 rc2024-04-02T00:47:04.024803+0000 b345611 72=57+15)->n(v0 rc2024-04-02T00:47:04.024803+0000 b345611 72=57+15) old_inodes=2 (inest mix w=1) (iversion lock w=1) caps={25201=pAsLsXsFs/pAsLsXsFsx@10} mcw={4=AsLsXsFs} | request=0 lock=2 dirfrag=1 caps=1 dirtyparent=0 replicated=1 dirty=1 waiter=0 authpin=1 scrubqueue=0 0x562e778dd700]
2024-04-02T00:47:05.087+0000 7fcbd6f51700 10 mds.0.cache.ino(0x100000001fb) mark_dirty_rstat
2024-04-02T00:47:05.087+0000 7fcbd6f51700 10 mds.0.locker mark_updated_scatterlock (inest mix->lock(2) w=1 dirty flushing) - added at 2024-04-02T00:47:05.087646+0000
2024-04-02T00:47:05.087+0000 7fcbd6f51700 10 mds.0.cache predirty_journal_parents last prop 0.277977 < 1, stopping
2024-04-02T00:47:05.087+0000 7fcbd6f51700 10 mds.0.cache predirty_journal_parents stop. marking nestlock on [inode 0x100000001fa [...4,head] /volumes/qa/sv_1/ auth{1=1,2=1,3=1,4=1} v61 pv63 ap=1 snaprealm=0x562e77935440 f(v0 m2024-04-02T00:43:59.649199+0000 2=1+1)->f(v0 m2024-04-02T00:43:59.649199+0000 2=1+1) n(v0 rc2024-04-02T00:47:04.024803+0000 b345728 74=58+16)->n(v1 rc2024-04-02T00:47:04.024803+0000 b345728 74=58+16) old_inodes=2 (inest mix->lock(2) w=1 dirty flushing) caps={25201=pAsLsXsFs/-@1},l=25201 | dirtyscattered=1 request=0 lock=1 dirfrag=1 caps=1 dirtyparent=0 replicated=1 dirty=1 waiter=1 authpin=1 scrubqueue=0 0x562e776c4680]
2024-04-02T00:47:05.087+0000 7fcbd6f51700 10 mds.0.locker mark_updated_scatterlock (inest mix->lock(2) w=1 dirty flushing) - already on list since 2024-04-02T00:47:05.087646+0000
2024-04-02T00:47:05.087+0000 7fcbd6f51700 20 mds.0.journal EMetaBlob::add_dir_context(0x562e779c8d00) already have diri in this segment (2357 >= 2354), setting maybenot flag [inode 0x100000001fa [...4,head] /volumes/qa/sv_1/ auth{1=1,2=1,3=1,4=1} v61 pv63 ap=1 snaprealm=0x562e77935440 f(v0 m2024-04-02T00:43:59.649199+0000 2=1+1)->f(v0 m2024-04-02T00:43:59.649199+0000 2=1+1) n(v0 rc2024-04-02T00:47:04.024803+0000 b345728 74=58+16)->n(v1 rc2024-04-02T00:47:04.024803+0000 b345728 74=58+16) old_inodes=2 (inest mix->lock(2) w=1 dirty flushing) caps={25201=pAsLsXsFs/-@1},l=25201 | dirtyscattered=1 request=0 lock=1 dirfrag=1 caps=1 dirtyparent=0 replicated=1 dirty=1 waiter=1 authpin=1 scrubqueue=0 0x562e776c4680]
2024-04-02T00:47:05.087+0000 7fcbd6f51700 20 mds.0.journal EMetaBlob::add_dir_context(0x562e76a34d80) reached unambig auth subtree, don't need 0x562e7789e500,0x562e7789ea00,0x562e779e0280 at [dir 0x1 / [2,head] auth{1=1,2=1,3=1,4=1} v=45 cv=45/29 REP dir_auth=0 ap=1+0 state=1610874913|complete|committing f(v0 m2024-04-02T00:43:58.551324+0000 1=0+1) n(v2 rc2024-04-02T00:47:04.024803+0000 b345845 81=61+20) hs=1+0,ss=0+0 dirty=1 | child=1 subtree=1 replicated=1 dirty=1 authpin=1 scrubqueue=0 0x562e76a34d80]
2024-04-02T00:47:05.087+0000 7fcbd6f51700 20 mds.0.journal EMetaBlob::add_dir_context final:
2024-04-02T00:47:05.087+0000 7fcbd6f51700 10 mds.0.cache journal_cow_dentry follows head on [dentry #0x1/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25 [2,head] auth{1=1,2=1,3=1,4=1} (dversion lock) pv=43 v=40 ino=0x100000001fb state=1610612736 | request=0 lock=0 inodepin=1 replicated=1 dirty=1 authpin=0 0x562e779e0780]
2024-04-02T00:47:05.087+0000 7fcbd6f51700 10 mds.0.cache journal_cow_dentry follows 3 < first on [inode 0x100000001fb [...4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/ auth{1=1,2=1,3=1,4=1} v40 pv43 ap=1 f(v0 m2024-04-02T00:45:59.436893+0000 1=0+1)->f(v0 m2024-04-02T00:45:59.436893+0000 1=0+1) n(v0 rc2024-04-02T00:47:04.024803+0000 b345611 72=57+15)->n(v0 rc2024-04-02T00:47:04.024803+0000 b345611 72=57+15) old_inodes=2 (inest mix w=1) (iversion lock w=1) caps={25201=pAsLsXsFs/pAsLsXsFsx@10} mcw={4=AsLsXsFs} | request=0 lock=2 dirfrag=1 caps=1 dirtyrstat=1 dirtyparent=0 replicated=1 dirty=1 waiter=0 authpin=1 scrubqueue=0 0x562e778dd700]
2024-04-02T00:47:05.087+0000 7fcbd6f51700 10 mds.0.cache journal_cow_dentry follows head on [dentry #0x1/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0 [2,head] auth{1=1,2=1,3=1,4=1} (dversion lock) pv=26 v=24 ino=0x10000000005 state=1610612736 | request=0 lock=0 inodepin=1 replicated=1 dirty=1 authpin=0 0x562e779e0500]
2024-04-02T00:47:05.087+0000 7fcbd6f51700 10 mds.0.cache journal_cow_dentry follows 3 < first on [inode 0x10000000005 [...4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/ auth{1=1,2=1,3=1,4=1} v24 pv26 ap=1 f(v0 m2024-04-02T00:46:55.944939+0000 1=0+1)->f(v0 m2024-04-02T00:46:55.944939+0000 1=0+1) n(v1 rc2024-04-02T00:47:04.024803+0000 b345611 71=57+14)->n(v2 rc2024-04-02T00:47:04.024803+0000 b345611 71=57+14) old_inodes=2 (inest mix->lock(2) w=1 flushing) (ifile excl) caps={25201=pAsLsXsFsx/AsLsXsFsx@2},l=25201 | ptrwaiter=0 dirtyscattered=1 request=0 lock=1 dirfrag=1 caps=1 dirtyparent=0 replicated=1 dirty=1 waiter=1 authpin=1 scrubqueue=0 0x562e76965080]
2024-04-02T00:47:05.087+0000 7fcbd6f51700 10 mds.0.cache.ino(0x10000000005) finish_scatter_gather_update_accounted 256 on [inode 0x10000000005 [...4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/ auth{1=1,2=1,3=1,4=1} v24 pv26 ap=1 f(v0 m2024-04-02T00:46:55.944939+0000 1=0+1)->f(v0 m2024-04-02T00:46:55.944939+0000 1=0+1) n(v1 rc2024-04-02T00:47:04.024803+0000 b345611 71=57+14)->n(v2 rc2024-04-02T00:47:04.024803+0000 b345611 71=57+14) old_inodes=2 (inest mix->lock(2) w=1 flushing) (ifile excl) caps={25201=pAsLsXsFsx/AsLsXsFsx@2},l=25201 | ptrwaiter=0 dirtyscattered=1 request=0 lock=1 dirfrag=1 caps=1 dirtyparent=0 replicated=1 dirty=1 waiter=1 authpin=1 scrubqueue=0 0x562e76965080]
2024-04-02T00:47:05.087+0000 7fcbd6f51700 10 mds.0.cache.dir(0x10000000005) assimilate_dirty_rstat_inodes_finish
2024-04-02T00:47:05.087+0000 7fcbd6f51700 10 mds.0.cache.ino(0x10000000005) journaling updated frag accounted_ on [dir 0x10000000005 /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/ [2,head] auth{1=1,2=1,3=1,4=1} pv=25 v=24 cv=24/1 ap=2+2 state=1610874913|complete|committing f(v0 m2024-04-02T00:46:55.944939+0000 1=0+1)->f(v0 m2024-04-02T00:46:55.944939+0000 1=0+1) n(v1 rc2024-04-02T00:47:04.024803+0000 b345611 70=57+13)->n(v2 rc2024-04-02T00:47:04.024803+0000 b345611 70=57+13) hs=1+0,ss=0+0 dirty=1 | child=1 replicated=1 dirty=1 authpin=1 scrubqueue=0 0x562e77a0bf80]
2024-04-02T00:47:05.087+0000 7fcbd6f51700 20 mds.0.log _submit_entry EUpdate scatter_writebehind [metablob 0x100000001fa, 3 dirs]
2024-04-02T00:47:05.087+0000 7fcbd6f51700 1 -- [v2:172.21.15.150:6838/741083205,v1:172.21.15.150:6839/741083205] <== mds.3 v2:172.21.15.150:6832/3824112962 273 ==== lock(a=lockack inest 0x10000000007.head) ==== 164+0+0 (unknown 0 0 0) 0x562e7797d000 con 0x562e778e3800
2024-04-02T00:47:05.087+0000 7fcbd0744700 10 mds.0.cache.ino(0x10000000221) _commit_ops
...
2024-04-02T00:47:05.096+0000 7fcbd0744700 10 mds.0.cache.dir(0x100000001fb) _omap_commit_ops
2024-04-02T00:47:05.096+0000 7fcbd0744700 1 -- [v2:172.21.15.150:6838/741083205,v1:172.21.15.150:6839/741083205] --> [v2:172.21.15.181:6800/1286864011,v1:172.21.15.181:6801/1286864011] -- osd_op(unknown.0.14:2427 2.13 2:ca8fdbba:::100000001fb.00000000:head [stat,omap-set-header in=274b,omap-set-vals in=1654b] snapc 0=[] ondisk+write+known_if_redirected+full_force+supports_pool_eio e90) -- 0x562e780e0400 con 0x562e77900400
2024-04-02T00:47:05.096+0000 7fcbd0744700 10 mds.0.cache.dir(0x10000000005) _omap_commit_ops
...
Then, around 30 seconds later, the version in the replica mds.f's MDCache was still v26 when the scrub ran:
2024-04-02T00:47:35.860+0000 7f405906b700 7 mds.3.locker rdlock_finish on (isnap sync) on [inode 0x100000001fb [...3,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/ rep@0.1 v26 f(v0 m2024-04-02T00:45:59.436893+0000 1=0+1) n(v0 rc2024-04-02T00:46:57.355915+0000 b541755 99=84+15) old_inodes=1 (inest mix) | lock=0 dirfrag=1 0x5598465ffb80]
...
2024-04-02T00:47:35.888+0000 7f405285e700 0 log_channel(cluster) log [WRN] : Scrub error on inode 0x1000000020d (/volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/EXCEL/PCMAGCD.XLS) see mds.k log and `damage ls` output for details
2024-04-02T00:47:35.888+0000 7f405285e700 -1 mds.3.scrubstack _validate_inode_done scrub error on inode [inode 0x1000000020d [4,head] /volumes/qa/sv_1/13bb7c23-1dbe-485b-9237-964a34184b25/client.0/tmp/clients/client0/~dmtmp/EXCEL/PCMAGCD.XLS auth v729 ap=1 s=260096 n(v0 rc2024-04-02T00:47:35.386273+0000 b260096 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={25201=0-4194304@3} caps={25201=pAsxLsXsxFsxcrwb/pAsxXsxFsxcrwb@52},l=25201 | ptrwaiter=0 request=0 lock=0 importingcaps=0 caps=1 replicated=0 dirty=1 waiter=0 authpin=1 scrubqueue=0 0x55984668f080]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":false,"read_ret_val":0,"ondisk_value":"(3)0x1000000020d:[<0x1000000020c/PCMAGCD.XLS v2>,<0x10000000009/EXCEL v3910>,<0x10000000008/~dmtmp v49>,<0x10000000007/client0 v33>,<0x10000000006/clients v23>,<0x10000000005/tmp v23>,<0x100000001fb/client.0 v24>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v40>,<0x10000000001/sv_1 v61>,<0x10000000000/qa v49>,<0x1/volumes v44>]//[]","memoryvalue":"(3)0x1000000020d:[<0x1000000020c/PCMAGCD.XLS v729>,<0x10000000009/EXCEL v4776>,<0x10000000008/~dmtmp v379>,<0x10000000007/client0 v88>,<0x10000000006/clients v57>,<0x10000000005/tmp v57>,<0x100000001fb/client.0 v52>,<0x100000001fa/13bb7c23-1dbe-485b-9237-964a34184b25 v26>,<0x10000000001/sv_1 v89>,<0x10000000000/qa v37>,<0x1/volumes v70>]//[]","error_str":"On-disk backtrace is divergent or newer"},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
...
I didn't find any mechanism that guarantees the new version will be propagated to the replicas, yet the scrub assumes the metadata is damaged.
Updated by Patrick Donnelly almost 2 years ago
The backtrace issue is fixed by https://github.com/ceph/ceph/pull/57354
Updated by Xiubo Li almost 2 years ago
- Project changed from Linux kernel client to CephFS
- Category deleted (fs/ceph)
- Status changed from In Progress to Duplicate
This should be a duplicate of https://tracker.ceph.com/issues/64730, which has already been fixed by Venky. Thanks!
Patrick Donnelly wrote in #note-15:
The backtrace issue is fixed by https://github.com/ceph/ceph/pull/57354
Thanks Patrick for pointing this out. Will mark it as a duplicate.
Updated by Xiubo Li almost 2 years ago
- Status changed from Duplicate to In Progress
Patrick Donnelly wrote in #note-15:
The backtrace issue is fixed by https://github.com/ceph/ceph/pull/57354
The above fix just treats the scrub as passed when any of the parents is non-auth, which is not always correct. Instead, we should ignore the non-auth CInodes and always compare only the auth ones.
I will use this tracker to fix it.
Updated by Xiubo Li almost 2 years ago
- Status changed from In Progress to Fix Under Review
- Pull request ID set to 57962
Updated by Xiubo Li almost 2 years ago · Edited
I have reproduced it with the following debug patch and steps:
diff --git a/src/mds/CInode.cc b/src/mds/CInode.cc
index ed065b87e85..d23d354fe15 100644
--- a/src/mds/CInode.cc
+++ b/src/mds/CInode.cc
@@ -1323,6 +1323,7 @@ void CInode::build_backtrace(int64_t pool, inode_backtrace_t& bt)
while (pdn) {
CInode *diri = pdn->get_dir()->get_inode();
bt.ancestors.push_back(inode_backpointer_t(diri->ino(), pdn->get_name(), in->get_inode()->version, diri->is_auth()));
+ dout(0) << "build_backtrace " << *diri << " is_auth = " << diri->is_auth() << dendl;
in = diri;
pdn = in->get_parent_dn();
}
@@ -4762,6 +4763,8 @@ void CInode::validate_disk_state(CInode::validated_data *results,
}
bool _backtrace(int rval) {
+ JSONFormatter f;
+ CachedStackStringStream css;
// set up basic result reporting and make sure we got the data
results->performed_validation = true; // at least, some of it!
results->backtrace.checked = true;
@@ -4839,6 +4842,9 @@ void CInode::validate_disk_state(CInode::validated_data *results,
} else {
results->backtrace.passed = true;
}
+ results->dump(&f);
+ f.flush(*css);
+ dout(0) << __func__ << " scrub on inode " << *in << ": " << css->strv() << dendl;
next:
if (!results->backtrace.passed && in->scrub_infop->header->get_repair()) {
1, set max_mds to 2:
$ ./bin/ceph fs set a max_mds 2
2, set the following configs:
mds_bal_split_size = 100
mds_bal_merge_size = 50
mds_bal_replicate_threshold = 1
3, mount a kclient and then do:
mkdir mydir1/mydir2/mydir3/mydir4 -p
cd mydir1/mydir2/mydir3/mydir4
for i in {0..800}; do touch file$i; done
setfattr -n ceph.dir.pin -v 1 mydir1/mydir2/mydir3/mydir4
touch mydir1/mydir2/mydir3/mydir4/file900
mkdir mydir1/dir1
mkdir mydir1/dir2
mkdir mydir1/dir3
mkdir mydir1/dir4
4, flush the journal logs from mds.0 and mds.1:
$ ./bin/ceph tell mds.0 flush journal
$ ./bin/ceph tell mds.1 flush journal
5, scrub /mydir1/mydir2/mydir3/mydir4/file900 on rank 0:
$ ./bin/ceph tell mds.0 scrub start /mydir1/mydir2/mydir3/mydir4/file900 recursive
The scrub command will be forwarded to mds.1 because the file's auth MDS is mds.1. Then, in the mds.1 log file:
From mds.0, we can see that for mydir1 the version is v896:
1574776 2024-06-12T19:39:42.129+0800 7f7913200640 7 mds.0.locker local_wrlock_finish on (iversion lock w=1) on [inode 0x10000000000 [...2,head] /mydir1/ auth{1=1} v896 ap=1 f(v0 m2024-06-12T19:21:08.170302+0800 5=0+5) n(v58 rc2024-06-12T19:26:26.856440+0800 810=802+8) (inest mix w=1) (ifile excl) (iversion lock w=1) caps={4621=pAsLsXsFsx/AsLsXsFsx@15},l=4621 | dirtyscattered=0 request=0 lock=2 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=0 replicated=1 dirty=1 waiter=0 authpin=1 0x5630b29f6100]
1574777 2024-06-12T19:39:42.129+0800 7f7913200640 7 mds.0.locker wrlock_finish on (inest mix w=1) on [inode 0x10000000000 [...2,head] /mydir1/ auth{1=1} v896 ap=1 f(v0 m2024-06-12T19:21:08.170302+0800 5=0+5) n(v58 rc2024-06-12T19:26:26.856440+0800 810=802+8) (inest mix w=1) (ifile excl) caps={4621=pAsLsXsFsx/AsLsXsFsx@15},l=4621 | dirtyscattered=0 request=0 lock=1 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=0 replicated=1 dirty=1 waiter=0 authpin=1 0x5630b29f6100]
While from mds.1 it's v864, which is always less than the one in mds.0:
156907 2024-06-12T19:29:39.685+0800 7f4fc5c00640 0 mds.1.cache.ino(0x10000000009) build_backtrace [inode 0x10000000003 [...2,head] /mydir1/mydir2/mydir3/mydir4/ rep@0.1 fragtree_t(*^2 00*^1 000*^3 01*^3 10*^1 101*^3 11*^1 110*^3) v8479 f(v1 m2024-06-12T19:26:26.855440+0800 802=802+0) n(v2 rc2024-06-12T19:26:26.856440+0800 803=802+1)/n(v0 rc2024-06-12T19:17:17.341028+0800 802=801+1) (idft mix dirty) (inest mix) (ifile mix) | dirtyscattered=1 request=0 lock=0 importing=0 dirfrag=35 dirwaiter=0 waiter=0 discoverbase=0 export_pin=1 0x55dd01783b80] is_auth = 0
156908 2024-06-12T19:29:39.685+0800 7f4fc5c00640 0 mds.1.cache.ino(0x10000000009) build_backtrace [inode 0x10000000002 [...2,head] /mydir1/mydir2/mydir3/ rep@0.1 v263 f(v0 m2024-06-12T19:14:42.852507+0800 1=0+1) n(v15 rc2024-06-12T19:17:17.341028+0800 803=801+2) (inest mix) | dirfrag=1 dirwaiter=0 discoverbase=0 0x55dd01783600] is_auth = 0
156909 2024-06-12T19:29:39.685+0800 7f4fc5c00640 0 mds.1.cache.ino(0x10000000009) build_backtrace [inode 0x10000000001 [...2,head] /mydir1/mydir2/ rep@0.1 v750 f(v0 m2024-06-12T19:14:42.852507+0800 1=0+1) n(v65 rc2024-06-12T19:26:26.855440+0800 805=802+3)/n(v15 rc2024-06-12T19:17:17.341028+0800 804=801+3) (inest mix) | dirfrag=1 dirwaiter=0 discoverbase=0 0x55dd01783080] is_auth = 0
156910 2024-06-12T19:29:39.685+0800 7f4fc5c00640 0 mds.1.cache.ino(0x10000000009) build_backtrace [inode 0x10000000000 [...2,head] /mydir1/ rep@0.1 v739 f(v0 m2024-06-12T19:21:08.170302+0800 5=0+5) n(v58 rc2024-06-12T19:26:26.855440+0800 810=802+8)/n(v15 rc2024-06-12T19:17:17.341028+0800 805=801+4) (inest mix) | lock=0 dirfrag=1 dirwaiter=0 discoverbase=0 0x55dd01782b00] is_auth = 0
156911 2024-06-12T19:29:39.685+0800 7f4fc5c00640 0 mds.1.cache.ino(0x10000000009) build_backtrace [inode 0x1 [...2,head] / rep@0.1 v344 snaprealm=0x55dd017c8480 f(v0 m2024-06-12T19:14:42.840507+0800 1=0+1) n(v54 rc2024-06-12T19:26:26.856440+0800 811=802+9)/n(v0 rc2024-06-12T19:14:21.989460+0800 1=0+1) (inest mix) | dirfrag=1 dirwaiter=0 discoverbase=0 0x55dd01782580] is_auth = 0
156912 2024-06-12T19:29:39.685+0800 7f4fc5c00640 20 mds.1.cache.ino(0x10000000009) ondisk_read_retval: 0
156913 2024-06-12T19:29:39.685+0800 7f4fc5c00640 10 mds.1.cache.ino(0x10000000009) decoded 191 bytes of backtrace successfully
156914 2024-06-12T19:29:39.685+0800 7f4fc5c00640 0 mds.1.cache.ino(0x10000000009) _backtrace scrub on inode [inode 0x10000000009 [2,head] /mydir1/mydir2/mydir3/mydir4/file900 auth{0=1} v1274 ap=1 s=0 n(v0 rc2024-06-12T19:26:26.856440+0800 1=1+0) | ptrwaiter=0 lock=0 importingcaps=0 caps=0 dirtyparent=0 replicated=1 dirty=0 authpin=1 scrubqueue=0 0x55dd01a70c00]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":true,"read_ret_val":0,"ondisk_value":"(3)0x10000000009:[<0x10000000003/file900 v1274 auth: unknown>,<0x10000000002/mydir4 v8477 auth: unknown>,<0x10000000001/mydir3 v263 auth: unknown>,<0x10000000000/mydir2 v750 auth: unknown>,<0x1/mydir1 v739 auth: unknown>]//[]","memoryvalue":"(3)0x10000000009:[<0x10000000003/file900 v1274 auth: no>,<0x10000000002/mydir4 v8479 auth: no>,<0x10000000001/mydir3 v263 auth: no>,<0x10000000000/mydir2 v750 auth: no>,<0x1/mydir1 v739 auth: no>]//[]","error_str":""},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code":0}
...
578512 2024-06-12T19:38:13.107+0800 7f4fcde00640 7 mds.1.locker handle_file_lock a=lock on (ifile sync) from mds.0 [inode 0x10000000000 [...2,head] /mydir1/ rep@0.1 v864 f(v0 m2024-06-12T19:21:08.170302+0800 5=0+5) n(v58 rc2024-06-12T19:26:26.855440+0800 810=802+8)/n(v15 rc2024-06-12T19:17:17.341028+0800 805=801+4) (inest mix) | lock=0 dirfrag=1 dirwaiter=0 discoverbase=0 0x55dd01782b00]
578513 2024-06-12T19:38:13.107+0800 7f4fcde00640 10 mds.1.locker eval_gather (ifile sync->lock) on [inode 0x10000000000 [...2,head] /mydir1/ rep@0.1 v864 f(v0 m2024-06-12T19:21:08.170302+0800 5=0+5) n(v58 rc2024-06-12T19:26:26.855440+0800 810=802+8)/n(v15 rc2024-06-12T19:17:17.341028+0800 805=801+4) (inest mix) (ifile sync->lock) | lock=0 dirfrag=1 dirwaiter=0 discoverbase=0 0x55dd01782b00]
So the next step is to persist mds.0's state to disk and then do the scrub.
Need to find a way to trigger that. Will continue tomorrow.
Updated by Xiubo Li almost 2 years ago · Edited
This time I successfully reproduced it; please see the logs:
7638111 2024-06-13T10:49:21.823+0800 7f4fc5c00640 0 mds.1.cache.ino(0x1000000000c) build_backtrace [inode 0x10000000002 [...2,head] /dir/mydir1/mydir2/mydir3/ rep@0.1 v12310 f(v0 m2024-06-13T10:48:47.934472+0800 5=3+2) n(v29 rc2024-06-13T10:45:33.366606+0800 808=805+3)/n(v15 rc2024-06-12T19:17:17.341028+0800 803=801+2) (inest mix) (ifile mix) | dirtyscattered=0 request=0 lock=0 importing=0 dirfrag=1 dirwaiter=0 discoverbase=0 0x55dd01783600] is_auth = 0
7638112 2024-06-13T10:49:21.823+0800 7f4fc5c00640 0 mds.1.cache.ino(0x1000000000c) build_backtrace [inode 0x10000000001 [...2,head] /dir/mydir1/mydir2/ rep@0.1 v11835 f(v0 m2024-06-12T19:14:42.852507+0800 1=0+1) n(v74 rc2024-06-13T10:45:33.366606+0800 809=805+4)/n(v15 rc2024-06-12T19:17:17.341028+0800 804=801+3) (inest mix) (ifile lock) | lock=0 dirfrag=1 dirwaiter=0 discoverbase=0 0x55dd01783080] is_auth = 0
7638113 2024-06-13T10:49:21.823+0800 7f4fc5c00640 0 mds.1.cache.ino(0x1000000000c) build_backtrace [inode 0x10000000000 [...2,head] /dir/mydir1/ rep@0.1 v11749 f(v0 m2024-06-13T08:26:34.021146+0800 6=0+6) n(v59 rc2024-06-13T08:26:34.021146+0800 811=802+9)/n(v15 rc2024-06-12T19:17:17.341028+0800 805=801+4) (inest mix) (ifile lock) | request=0 lock=0 dirfrag=1 dirwaiter=0 discoverbase=0 0x55dd01782b00] is_auth = 0
7638114 2024-06-13T10:49:21.823+0800 7f4fc5c00640 0 mds.1.cache.ino(0x1000000000c) build_backtrace [inode 0x1000000000b [...2,head] /dir/ rep@0.1 v11830 f(v0 m2024-06-13T08:39:51.800738+0800 1=0+1) n(v4 rc2024-06-13T10:48:17.330179+0800 820=809+11)/n(v01=0+1) (inest mix) (ifile lock) | lock=0 dirfrag=1 0x55dd01cac680] is_auth = 0
7638115 2024-06-13T10:49:21.823+0800 7f4fc5c00640 0 mds.1.cache.ino(0x1000000000c) build_backtrace [inode 0x1 [...2,head] / rep@0.1 v5227 snaprealm=0x55dd017c8480 f(v0 m2024-06-13T08:39:51.800738+0800 1=0+1) n(v56 rc2024-06-13T09:01:59.881423+0800 814=802+12)/n(v0 rc2024-06-12T19:14:21.989460+0800 1=0+1) (inest mix) | lock=0 dirfrag=1 dirwaiter=0 discoverbase=0 0x55dd01782580] is_auth = 0
7638116 2024-06-13T10:49:21.823+0800 7f4fc5c00640 20 mds.1.cache.ino(0x1000000000c) ondisk_read_retval: 0
7638117 2024-06-13T10:49:21.823+0800 7f4fc5c00640 10 mds.1.cache.ino(0x1000000000c) decoded 186 bytes of backtrace successfully
7638118 2024-06-13T10:49:21.823+0800 7f4fc5c00640 0 mds.1.cache.ino(0x1000000000c) _backtrace scrub on inode [inode 0x1000000000c [2,head] /dir/mydir1/mydir2/mydir3/file1 auth{0=1} v19766 ap=1 s=10 n(v0 rc2024-06-13T10:48:34.298342+0800 b10 1=1+0) (iauth excl) (ifile excl) (ixattr excl) cr={4803=0-4194304@1} caps={4803=pAsxLsXsxFsxcrwb/pAsxXsxFxwb@1},l=4803 | importingcaps=0 caps=1 replicated=1 authpin=1 scrubqueue=0 0x55dd0235d600]: {"performed_validation":true,"passed_validation":false,"backtrace":{"checked":true,"passed":true,"read_ret_val":0,"ondisk_value":"(3)0x1000000000c:[<0x10000000002/file1 v19558 auth: unknown>,<0x10000000001/mydir3 v12254 auth: unknown>,<0x10000000000/mydir2 v11787 auth: unknown>,<0x1000000000b/mydir1 v11783 auth: unknown>,<0x1/dir v11786 auth: unknown>]//[]","memoryvalue":"(3)0x1000000000c:[<0x10000000002/file1 v19766 auth: no>,<0x10000000001/mydir3 v12310 auth: no>,<0x10000000000/mydir2 v11835 auth: no>,<0x1000000000b/mydir1 v11749 auth: no>,<0x1/dir v11830 auth: no>]//[]","error_str":""},"raw_stats":{"checked":false,"passed":false,"read_ret_val":0,"ondisk_value.dirstat":"f()","ondisk_value.rstat":"n()","memory_value.dirstat":"f()","memory_value.rstat":"n()","error_str":""},"return_code": 0}
"ondisk_value": "(3)0x1000000000c:[<0x10000000002/file1 v19558 auth: unknown>, <0x10000000001/mydir3 v12254 auth: unknown>, <0x10000000000/mydir2 v11787 auth: unknown>, <0x1000000000b/mydir1 v11783 auth: unknown>, <0x1/dir v11786 auth: unknown>]"
"memoryvalue": "(3)0x1000000000c:[<0x10000000002/file1 v19766 auth: no>, <0x10000000001/mydir3 v12310 auth: no>, <0x10000000000/mydir2 v11835 auth: no>, <0x1000000000b/mydir1 v11749 auth: no>, <0x1/dir v11830 auth: no>]"
The detailed steps:
1, add the debug patch as mentioned in https://tracker.ceph.com/issues/57656#note-19.
2, set max_mds to 2:
$ ./bin/ceph fs set a max_mds 2
3, set the following configs:
mds_bal_split_size = 100
mds_bal_merge_size = 50
mds_bal_replicate_threshold = 1
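Equivalently, the three settings above can be placed in the [mds] section of ceph.conf before starting the MDS daemons (a sketch; option names and values are copied from the list above):

```ini
[mds]
    mds_bal_split_size = 100
    mds_bal_merge_size = 50
    mds_bal_replicate_threshold = 1
```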
4, mount a kclient and then do:
$ mkdir mydir1/mydir2/mydir3/mydir4 -p
$ setfattr -n ceph.dir.pin -v 1 mydir1/mydir2/mydir3/mydir4
$ touch mydir1/mydir2/mydir3/mydir4/file$i  # will trigger the balancer to load the ancestor inodes to mds.1
$ touch mydir1/mydir2/mydir3/file1
$ for i in {0..10}; do mkdir mydir1/mydir2/mydir3/dir$i; done
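The commands in step 4 can be combined into one script (a sketch; it assumes the kclient mount is the current working directory, and it assumes the `file$i` touch above ran inside the same 0..10 loop used for the dir$i mkdirs — the original does not say, so that loop is a guess):

```shell
#!/bin/sh
# Sketch of step 4; run from the root of the kclient mount.
set -e
mkdir -p mydir1/mydir2/mydir3/mydir4
# Pin mydir4 to rank 1; the ceph.dir.pin xattr only takes effect on CephFS,
# so failure is tolerated when run elsewhere.
setfattr -n ceph.dir.pin -v 1 mydir1/mydir2/mydir3/mydir4 || true
# ASSUMPTION: the original "touch .../file$i" is wrapped in the same 0..10
# loop; this is what triggers the balancer to load the ancestors to mds.1.
for i in $(seq 0 10); do
    touch mydir1/mydir2/mydir3/mydir4/file$i
done
touch mydir1/mydir2/mydir3/file1
for i in $(seq 0 10); do
    mkdir mydir1/mydir2/mydir3/dir$i
done
```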
5, flush the journal logs from mds.0:
$ ./bin/ceph tell mds.0 flush journal
6, flush path mydir1/mydir2/mydir3/file1:
$ ./bin/ceph tell mds.0 flush_path /mydir1/mydir2/mydir3/file1
7, export mydir3 to mds.1; mydir3/file1 will then be exported to mds.1 too:
$ ./bin/ceph tell mds.0 export dir /mydir1/mydir2/mydir3/ 1
8, do the scrub for path /mydir1/mydir2/mydir3/file1:
$ ./bin/ceph tell mds.0 scrub start /mydir1/mydir2/mydir3/file1 recursive
Updated by Venky Shankar over 1 year ago
- Related to Bug #67867: qa: dbench.sh fails with Resource Temporarily Unavailable added
Updated by Milind Changire over 1 year ago
main:
https://pulpito.ceph.com/mchangir-2024-12-10_11:05:18-fs-wip-mchangir-testing-mon-caps-main-debug-testing-default-smithi/8028289
https://pulpito.ceph.com/mchangir-2024-12-10_11:05:18-fs-wip-mchangir-testing-mon-caps-main-debug-testing-default-smithi/8028341
https://pulpito.ceph.com/mchangir-2024-12-10_11:05:18-fs-wip-mchangir-testing-mon-caps-main-debug-testing-default-smithi/8028452
Updated by Milind Changire about 1 year ago
squid:
https://pulpito.ceph.com/vshankar-2024-11-25_18:02:39-fs-wip-vshankar-testing-20241125.064001-squid-debug-testing-default-smithi/8008348
https://pulpito.ceph.com/vshankar-2024-11-25_18:02:39-fs-wip-vshankar-testing-20241125.064001-squid-debug-testing-default-smithi/8008410
https://pulpito.ceph.com/vshankar-2024-11-25_18:02:39-fs-wip-vshankar-testing-20241125.064001-squid-debug-testing-default-smithi/8008518
Updated by Milind Changire about 1 year ago
Updated by Patrick Donnelly 22 days ago · Edited
- Subject changed from [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) to dbench: write failed on handle 10009 (Resource temporarily unavailable)
- Status changed from Fix Under Review to Need More Info
- Assignee deleted (Xiubo Li)
- Backport set to tentacle,squid
- Pull request ID deleted (57962)
Moving this ticket back to unassigned. The failure still occurs in tentacle with the stock kernel.
/teuthology/pdonnell-2026-02-26_05:21:09-fs-wip-jcollin-testing-20260213.155635-tentacle-distro-default-trial/71310/teuthology.log
Original PR: https://github.com/ceph/ceph/pull/57962