Bug #69481: Cannot _replay BlueFS log
Status: open
% Done: 0%
Description
2025-01-09T21:32:08.301596693Z /builddir/build/BUILD/ceph-18.2.1/src/os/bluestore/BlueFS.cc: 1592: FAILED ceph_assert(delta.offset == fnode.allocated)
2025-01-09T21:32:08.301625002Z 2025-01-09T21:32:08.300+0000 7f55de0d0c80 -1 bluefs _replay invalid op_file_update_inc, new extents miss end of file fnode=file(ino 208141 size 0x3115408 mtime 2025-01-09T20:04:52.331464+0000 allocated 3120000 alloc_commit 3120000 extents [1:0x3e5171d0000~3120000]) delta=delta(ino 208141 size 0x3115408 mtime 2025-01-09T20:04:52.331889+0000 offset 3115408 extents [])
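Reading the two lines above together: the file was truncated (size 0x3115408) but its allocation was left untrimmed (allocated 0x3120000), and the incremental delta encodes offset equal to the size, with no new extents, so on replay delta.offset != fnode.allocated and the assert fires. The sketch below uses made-up stand-in types (not the actual BlueFS.cc code) to illustrate the invariant being enforced: an op_file_update_inc may only append extents exactly at the current allocation boundary. The fix referenced in the update below, BlueFS::truncate() (#61314), is consistent with this reading.

#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative stand-ins only -- not the real BlueFS types.
struct Extent { uint64_t offset = 0, length = 0; };

struct FNode {
  uint64_t size = 0;       // logical file size
  uint64_t allocated = 0;  // total space backed by extents
  std::vector<Extent> extents;
};

struct Delta {
  uint64_t offset = 0;     // where the delta's new extents must begin
  std::vector<Extent> extents;
};

// Sketch of replaying an incremental fnode update: new extents may only be
// appended exactly at the current allocation boundary.
void replay_update_inc(FNode& fnode, const Delta& delta) {
  assert(delta.offset == fnode.allocated);  // the check that fails above
  for (const Extent& e : delta.extents) {
    fnode.extents.push_back(e);
    fnode.allocated += e.length;
  }
}

int main() {
  FNode f;
  f.size = 0x3115408;       // truncated logical size
  f.allocated = 0x3120000;  // allocation not trimmed by truncate()
  Delta d;
  d.offset = 0x3115408;     // encoded against size, not allocated
  replay_update_inc(f, d);  // aborts: 0x3115408 != 0x3120000
}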
Updated by Adam Kupczyk about 1 year ago
os/bluestore: Fix BlueFS::truncate() #61314
akupczyk-2025-01-10_14:29:02-rados-wip-aclamk-bluefs-truncate-fix-distro-default-smithi
0-100/200 rados batch
[8070344]
rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async objectstore/{bluestore-options/write$/{write_v1} bluestore/bluestore-stupid} rados supported-random-distro$/{centos_latest} thrashers/sync-many workloads/rados_api_tests}
https://tracker.ceph.com/issues/69405
[8070361]
rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}}
Teuthology log at the breaking point (or is it success followed by teardown?):
2025-01-10T15:31:06.968 INFO:tasks.workunit.client.0.smithi170.stderr:+ sudo ls /dev/disk/by-path
2025-01-10T15:31:06.968 INFO:tasks.workunit.client.0.smithi170.stderr:+ grep iscsi
2025-01-10T15:31:06.982 INFO:tasks.workunit.client.0.smithi170.stdout:ip-172.21.15.170:3260-iscsi-iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw-lun-0
2025-01-10T15:31:06.983 INFO:tasks.workunit.client.0.smithi170.stdout:ip-172.21.15.170:3260-iscsi-iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw-lun-1
2025-01-10T15:31:06.984 INFO:teuthology.orchestra.run:Running command with timeout 3600
2025-01-10T15:31:06.984 DEBUG:teuthology.orchestra.run.smithi170:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
2025-01-10T15:31:07.056 INFO:tasks.workunit:Stopping ['cephadm/test_iscsi_pids_limit.sh', 'cephadm/test_iscsi_etc_hosts.sh', 'cephadm/test_iscsi_setup.sh'] on client.0...
2025-01-10T15:31:07.057 DEBUG:teuthology.orchestra.run.smithi170:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0
2025-01-10T15:31:07.410 DEBUG:teuthology.parallel:result is None
2025-01-10T15:31:07.410 DEBUG:teuthology.orchestra.run.smithi170:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0
2025-01-10T15:31:07.438 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0
2025-01-10T15:31:07.438 DEBUG:teuthology.orchestra.run.smithi170:> rmdir -- /home/ubuntu/cephtest/mnt.0
2025-01-10T15:31:07.492 INFO:journalctl@ceph.mon.a.smithi170.stdout:Jan 10 15:31:07 smithi170 ceph-mon[33377]: from='client.14278 -' entity='client.iscsi.foo.smithi170.rjmqcc' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2025-01-10T15:31:07.518 INFO:tasks.workunit:Deleted artificial mount point /home/ubuntu/cephtest/mnt.0/client.0
2025-01-10T15:31:07.518 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2025-01-10T15:31:07.528 INFO:tasks.cephadm:Teardown begin
......
2025-01-10T15:31:43.386 INFO:tasks.cephadm.osd.2:Stopped osd.2
2025-01-10T15:31:43.386 DEBUG:teuthology.orchestra.run.smithi170:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid e3d1808e-cf66-11ef-9d61-a94da4efb481 --force --keep-logs
!!!! and somehow we are left waiting here, until the run is killed almost eight hours later:
2025-01-10T23:14:18.194 DEBUG:teuthology.exit:Got signal 15; running 1 handler...
2025-01-10T23:14:18.219 DEBUG:teuthology.task.console_log:Killing console logger for smithi170
2025-01-10T23:14:18.219 DEBUG:teuthology.exit:Finished running handlers
Everything shut down at ~15:31 except tcmu-runner, which kept running:
....
2025-01-10 23:09:02.343 7 [ERROR] tcmu_rbd_image_open:640 rbd/foo.disk_2: Could not connect to cluster. (Err -110)
2025-01-10 23:14:03.348 7 [ERROR] tcmu_rbd_image_open:640 rbd/foo.disk_2: Could not connect to cluster. (Err -110)
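For reference, Err -110 is a negated Linux errno: 110 is ETIMEDOUT, so tcmu-runner was repeatedly timing out while trying to reach a cluster that had already been torn down. A one-liner to confirm the value (illustrative, not tcmu-runner code):

#include <cerrno>
#include <cstring>
#include <iostream>

int main() {
  static_assert(ETIMEDOUT == 110, "assumes Linux errno numbering");
  std::cout << ETIMEDOUT << ": " << std::strerror(ETIMEDOUT) << '\n';
  // prints: 110: Connection timed out
}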
I see ceph-volume complaining, but it happens during deactivate; I do not think it is an actual error.
[2025-01-10 15:31:41,350][ceph_volume.devices.lvm.deactivate][ERROR ] No data or block LV found for OSD2
Diagnosis:
Everything is OK, but tcmu-runner failed to stop and dragged the run into DEAD?
[8070365]
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/few msgr/async-v2only objectstore/{bluestore-options/write$/{write_v1} bluestore/bluestore-bitmap} rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/rados_api_tests}
2025-01-10T15:40:35.049 INFO:tasks.workunit.client.0.smithi169.stdout: pool: [ FAILED ] 2 tests, listed below:
2025-01-10T15:40:35.049 INFO:tasks.workunit.client.0.smithi169.stdout: pool: [ FAILED ] NeoRadosPools.PoolCreateDelete
2025-01-10T15:40:35.049 INFO:tasks.workunit.client.0.smithi169.stdout: pool: [ FAILED ] NeoRadosPools.PoolCreateWithCrushRule
2025-01-10T15:40:35.049 INFO:tasks.workunit.client.0.smithi169.stdout: pool:
2025-01-10T15:40:35.049 INFO:tasks.workunit.client.0.smithi169.stdout: pool: 2 FAILED TESTS
https://tracker.ceph.com/issues/69405
[8070366]
rados/mgr/{clusters/{2-node-mgr} debug/mgr distro/{ubuntu_latest} mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-options/write$/{write_v2}} tasks/{1-install 2-ceph 3-mgrmodules 4-units/module_selftest}}
2025-01-10T15:47:02.857 INFO:tasks.cephfs_test_runner:test_selftest_command_spam (tasks.mgr.test_module_selftest.TestModuleSelftest) ... ERROR
https://tracker.ceph.com/issues/69494
[8070390]
rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/{bluestore-options/write$/{write_v2} bluestore/bluestore-comp-zlib} rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap_host thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1}
The test gave up, but this was not noticed, so it propagated to DEAD.
2025-01-10T16:47:59.279 INFO:tasks.ceph.mon.c.smithi045.stderr:2025-01-10T16:47:59.265+0000 7fcb923bc640 -1 mon.c@2(peon) e1 *** Got Signal Terminated ***
2025-01-10T16:47:59.279 INFO:tasks.ceph.mon.a.smithi008.stderr:2025-01-10T16:47:59.264+0000 7f246b235640 -1 received signal: Terminated from /usr/bin/python3 /usr/bin/daemon-helper kill ceph-mon -f --cluster ceph -i a (PID: 26422) UID: 0
2025-01-10T16:47:59.279 INFO:tasks.ceph.mon.a.smithi008.stderr:2025-01-10T16:47:59.264+0000 7f246b235640 -1 mon.a@0(leader) e1 *** Got Signal Terminated ***
2025-01-10T16:47:59.471 INFO:tasks.ceph.mgr.y.smithi008.stderr:daemon-helper: command crashed with signal 15
2025-01-10T23:50:30.161 DEBUG:teuthology.exit:Got signal 15; running 1 handler...
2025-01-10T23:50:30.222 DEBUG:teuthology.task.console_log:Killing console logger for smithi008
2025-01-10T23:50:30.223 DEBUG:teuthology.task.console_log:Killing console logger for smithi028
2025-01-10T23:50:30.223 DEBUG:teuthology.task.console_log:Killing console logger for smithi045
2025-01-10T23:50:30.224 DEBUG:teuthology.task.console_log:Killing console logger for smithi112
2025-01-10T23:50:30.224 DEBUG:teuthology.exit:Finished running handlers
OSD 6 restarted
2025-01-10T16:06:17.709+0000 7f0cd0cffa40 0 ceph version 19.3.0-6769-gd2b0a97c (d2b0a97cfb2844af08135bac38fc4dc7353510b6) squid (dev), process ceph-osd, pid 32335
OSD 3 restarted
2025-01-10T16:11:01.579+0000 7fb09ee65a40 0 ceph version 19.3.0-6769-gd2b0a97c (d2b0a97cfb2844af08135bac38fc4dc7353510b6) squid (dev), process ceph-osd, pid 34227
OSD 9 restarted
2025-01-10T16:13:18.793+0000 7f2607af8a40 0 ceph version 19.3.0-6769-gd2b0a97c (d2b0a97cfb2844af08135bac38fc4dc7353510b6) squid (dev), process ceph-osd, pid 34964
OSD 12 down, never got up.
2025-01-10T16:16:06.317+0000 7f6caf27f640 20 bluestore.MempoolThread(0x559901ee9c48) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2152 kv_onode_alloc: 234881024 kv_onode_used: 1377248 meta_alloc: 1140850688 meta_used: 10202056 data_alloc: 251658240 data_used: 36249924
2025-01-10T16:16:23.443 INFO:tasks.ceph.ceph_manager.ceph:waiting for clean
... a lot of not active+clean pgs ....
... cleaning, progressing ...
.....
.....
recovery is stuck forever
.....
.....
2025-01-10T16:22:44.919 INFO:tasks.ceph.ceph_manager.ceph:PG 3.2 is not active+clean
2025-01-10T16:22:44.919 INFO:tasks.ceph.ceph_manager.ceph:{'pgid': '3.2', 'version': "701'767", 'reported_seq': 292, 'reported_epoch': 916, 'state': 'down+remapped', 'last_fresh': '2025-01-10T16:22:38.768119+0000', 'last_change': '2025-01-10T16:16:13.665833+0000', 'last_active': '2025-01-10T16:16:06.625447+0000', 'last_peered': '2025-01-10T16:16:06.483381+0000', 'last_clean': '2025-01-10T16:16:00.846309+0000', 'last_became_active': '2025-01-10T16:16:04.792145+0000', 'last_became_peered': '2025-01-10T16:16:04.792145+0000', 'last_unstale': '2025-01-10T16:22:38.768119+0000', 'last_undegraded': '2025-01-10T16:22:38.768119+0000', 'last_fullsized': '2025-01-10T16:22:38.768119+0000', 'mapping_epoch': 714, 'log_start': "0'0", 'ondisk_log_start': "0'0", 'created': 34, 'last_epoch_clean': 609, 'parent': '0.0', 'parent_split_bits': 0, 'last_scrub': "699'763", 'last_scrub_stamp': '2025-01-10T16:15:59.905131+0000', 'last_deep_scrub': "0'0", 'last_deep_scrub_stamp': '2025-01-10T16:04:39.199229+0000', 'last_clean_scrub_stamp': '2025-01-10T16:15:59.905131+0000', 'objects_scrubbed': 18, 'log_size': 767, 'log_dups_size': 0, 'ondisk_log_size': 767, 'stats_invalid': False, 'dirty_stats_invalid': False, 'omap_stats_invalid': False, 'hitset_stats_invalid': False, 'hitset_bytes_stats_invalid': False, 'pin_stats_invalid': False, 'manifest_stats_invalid': False, 'snaptrimq_len': 0, 'last_scrub_duration': 1, 'scrub_schedule': 'no scrub is scheduled', 'scrub_duration': 483, 'objects_trimmed': 0, 'snaptrim_duration': 0.305070167, 'stat_sum': {'num_bytes': 19144704, 'num_objects': 18, 'num_object_clones': 16, 'num_object_copies': 54, 'num_objects_missing_on_primary': 43, 'num_objects_missing': 43, 'num_objects_degraded': 0, 'num_objects_misplaced': 0, 'num_objects_unfound': 0, 'num_objects_dirty': 18, 'num_whiteouts': 0, 'num_read': 28, 'num_read_kb': 2447, 'num_write': 98, 'num_write_kb': 10744, 'num_scrub_errors': 0, 'num_shallow_scrub_errors': 0, 'num_deep_scrub_errors': 0, 'num_objects_recovered': 64, 'num_bytes_recovered': 8781824, 'num_keys_recovered': 0, 'num_objects_omap': 0, 'num_objects_hit_set_archive': 0, 'num_bytes_hit_set_archive': 0, 'num_flush': 0, 'num_flush_kb': 0, 'num_evict': 0, 'num_evict_kb': 0, 'num_promote': 0, 'num_flush_mode_high': 0, 'num_flush_mode_low': 0, 'num_evict_mode_some': 0, 'num_evict_mode_full': 0, 'num_objects_pinned': 0, 'num_legacy_snapsets': 0, 'num_large_omap_objects': 0, 'num_objects_manifest': 0, 'num_omap_bytes': 0, 'num_omap_keys': 0, 'num_objects_repaired': 0}, 'up': [11, 15, 0], 'acting': [11, 2147483647, 2147483647], 'avail_no_missing': [], 'object_location_counts': [], 'blocked_by': [12], 'up_primary': 11, 'acting_primary': 11, 'purged_snaps': [{'start': '125', 'length': '1'}, {'start': '13a', 'length': '1'}]}
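In that dump, acting is [11, 2147483647, 2147483647]: two of the three acting slots are unmapped, since 2147483647 (0x7fffffff) is the CRUSH_ITEM_NONE placeholder, and blocked_by is [12], matching OSD 12 which never came back up. A trivial illustration of the constant (the value is from Ceph's crush headers; the snippet itself is not Ceph code):

#include <cstdint>
#include <iostream>

// 0x7fffffff is CRUSH_ITEM_NONE, the "no OSD mapped" placeholder that
// appears as 2147483647 in pg dumps.
constexpr int32_t CRUSH_ITEM_NONE = 0x7fffffff;

int main() {
  std::cout << CRUSH_ITEM_NONE << '\n';  // 2147483647, as in 'acting' above
}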
2025-01-10T16:22:45.300 INFO:tasks.ceph.ceph_manager.ceph:no progress seen, keeping timeout for now
.....
2025-01-10T16:47:46.364 INFO:tasks.thrashosds.thrasher:Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_ceph-c_d2b0a97cfb2844af08135bac38fc4dc7353510b6/qa/tasks/ceph_manager.py", line 192, in wrapper
    return func(self)
  File "/home/teuthworker/src/git.ceph.com_ceph-c_d2b0a97cfb2844af08135bac38fc4dc7353510b6/qa/tasks/ceph_manager.py", line 1486, in _do_thrash
    self.test_map_discontinuity()
  File "/home/teuthworker/src/git.ceph.com_ceph-c_d2b0a97cfb2844af08135bac38fc4dc7353510b6/qa/tasks/ceph_manager.py", line 1264, in test_map_discontinuity
    self.ceph_manager.wait_for_clean(
  File "/home/teuthworker/src/git.ceph.com_ceph-c_d2b0a97cfb2844af08135bac38fc4dc7353510b6/qa/tasks/ceph_manager.py", line 2930, in wait_for_clean
    assert time.time() - start < timeout, \
AssertionError: wait_for_clean: failed before timeout expired
[8070406]
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/{bluestore-options/write$/{write_v2} bluestore/bluestore-low-osd-mem-target} rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/dedup-io-snaps}
https://tracker.ceph.com/issues/68518
[8070411]
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/few msgr/async-v2only objectstore/{bluestore-options/write$/{write_v2} bluestore/bluestore-bitmap} rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/rados_api_tests}
https://tracker.ceph.com/issues/69405
[8070421]
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/{bluestore-options/write$/{write_random} bluestore/bluestore-low-osd-mem-target} rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/set-chunks-read}
Failed to deploy.
https://pulpito.ceph.com/akupczyk-2025-01-10_14:32:27-rados-wip-aclamk-bluefs-truncate-fix-distro-default-smithi/
100-200/200 rados batch
[8070446]
rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} tasks/e2e}
https://tracker.ceph.com/issues/68668
[8070458]
rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/{bluestore-options/write$/{write_random} bluestore/bluestore-stupid} rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/cache-snaps}
Failed to deploy.
[8070468]
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/{bluestore-options/write$/{write_random} bluestore/bluestore-comp-zstd} rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/rados_api_tests}
https://tracker.ceph.com/issues/69405
[8070470]
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/{bluestore-options/write$/{write_random} bluestore/bluestore-low-osd-mem-target} rados supported-random-distro$/{centos_latest} thrashers/pggrow_host thrashosds-health workloads/radosbench}
2025-01-10T17:35:56.612 INFO:tasks.thrashosds.thrasher:Setting 3.8 to [15, 5, 9, 4]
2025-01-10T17:35:56.612 INFO:tasks.thrashosds.thrasher:cmd ['osd', 'pg-upmap-items', '3.8', '15', '5', '9', '4']
2025-01-10T17:36:02.673 INFO:tasks.thrashosds.thrasher:in_osds: [2, 6, 10, 14] out_osds: [1, 5, 9, 13, 0, 4, 8, 12, 3, 7, 11, 15] dead_osds: [] live_osds: [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]
.....
2025-01-10T17:36:03.514 INFO:tasks.thrashosds.thrasher:Setting 2.1 to [1, 2]
2025-01-10T17:36:03.514 INFO:tasks.thrashosds.thrasher:cmd ['osd', 'pg-upmap', '2.1', '1', '2']
2025-01-10T17:36:09.336 INFO:tasks.thrashosds.thrasher:in_osds: [2, 6, 10, 14] out_osds: [1, 5, 9, 13, 0, 4, 8, 12, 3, 7, 11, 15] dead_osds: [] live_osds: [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]
2025-01-10T17:36:09.336 INFO:tasks.thrashosds.thrasher:choose_action: min_in 2 min_out 0 min_live 2 min_dead 0 chance_down 0.40
2025-01-10T17:36:09.336 INFO:tasks.thrashosds.thrasher:check thrash_hosts: in_osds > minin
2025-01-10T17:36:09.336 INFO:tasks.thrashosds.thrasher:primary_affinity
2025-01-10T17:36:09.336 INFO:tasks.thrashosds.thrasher:Setting osd 6 primary_affinity to 0.000000
2025-01-10T17:36:14.708 INFO:tasks.thrashosds.thrasher:in_osds: [2, 6, 10, 14] out_osds: [1, 5, 9, 13, 0, 4, 8, 12, 3, 7, 11, 15] dead_osds: [] live_osds: [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]
.....
2025-01-10T17:38:16.369 INFO:tasks.thrashosds.thrasher:in_osds: [12, 13] out_osds: [1, 5, 9, 0, 4, 8, 3, 7, 11, 15, 2, 6, 10, 14] dead_osds: [0, 8] live_osds: [4, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]
rados bench cannot make progress (the finished count is stuck at 200 with 16 writes still in flight):
2025-01-10T17:39:32.502 INFO:tasks.radosbench.radosbench.0.smithi192.stdout:Maintaining 16 concurrent writes of 65536 bytes to objects of size 65536 for up to 90 seconds or 0 objects
2025-01-10T17:39:32.502 INFO:tasks.radosbench.radosbench.0.smithi192.stdout:Object prefix: benchmark_data_smithi192_52561
2025-01-10T17:39:32.502 INFO:tasks.radosbench.radosbench.0.smithi192.stdout: sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
2025-01-10T17:39:32.503 INFO:tasks.radosbench.radosbench.0.smithi192.stdout: 0 0 0 0 0 0 - 0
2025-01-10T17:39:33.533 INFO:tasks.radosbench.radosbench.0.smithi192.stdout: 1 16 216 200 12.4982 12.5 0.00157024 0.00711996
2025-01-10T17:39:34.450 INFO:tasks.radosbench.radosbench.0.smithi192.stdout: 2 16 216 200 6.24903 0 - 0.00711996
2025-01-10T17:39:35.450 INFO:tasks.radosbench.radosbench.0.smithi192.stdout: 3 16 216 200 4.16607 0 - 0.00711996
2025-01-10T17:39:36.548 INFO:tasks.radosbench.radosbench.0.smithi192.stdout: 4 16 216 200 3.12456 0 - 0.00711996
2025-01-10T17:39:37.451 INFO:tasks.radosbench.radosbench.0.smithi192.stdout: 5 16 216 200 2.49967 0 - 0.00711996
2025-01-10T17:39:38.451 INFO:tasks.radosbench.radosbench.0.smithi192.stdout: 6 16 216 200 2.08307 0 - 0.00711996
[8070477]
rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/{bluestore-options/write$/{write_random} bluestore/bluestore-bitmap} rados tasks/rados_cls_all validater/valgrind}
"2025-01-10T18:12:52.026762+0000 mon.a (mon.0) 351 : cluster [WRN] Health check failed: 1 OSD experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)" in cluster log
[8070509]
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/{bluestore-options/write$/{write_v2} bluestore/bluestore-comp-zstd} rados supported-random-distro$/{ubuntu_latest} thrashers/careful_host thrashosds-health workloads/rados_api_tests}