Bug #72890
openrados/thrash-old-clients cluster create and lots of scrubs then timed out
Description
2025-08-20T04:23:26.480 INFO:journalctl@ceph.mon.a.smithi005.stdout:Aug 20 04:23:26 smithi005 ceph-mon[32340]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=1737265018})
2025-08-20T04:23:26.481 INFO:journalctl@ceph.mon.a.smithi005.stdout:Aug 20 04:23:26 smithi005 ceph-mon[32340]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=137647994})
2025-08-20T04:23:26.481 INFO:journalctl@ceph.mon.a.smithi005.stdout:Aug 20 04:23:26 smithi005 ceph-mon[32340]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=3586421394})
2025-08-20T04:23:26.481 INFO:journalctl@ceph.mon.a.smithi005.stdout:Aug 20 04:23:26 smithi005 ceph-mon[32340]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=2287345961})
2025-08-20T04:23:26.481 INFO:journalctl@ceph.mon.a.smithi005.stdout:Aug 20 04:23:26 smithi005 ceph-mon[32340]: scrub ok on 0,1,2: ScrubResult(keys {logm=95,mdsmap=3,mgr=2} crc {logm=1249250401,mdsmap=1041094236,mgr=1102074085})
2025-08-20T04:23:26.481 INFO:journalctl@ceph.mon.a.smithi005.stdout:Aug 20 04:23:26 smithi005 ceph-mon[32340]: scrub ok on 0,1,2: ScrubResult(keys {mgr=16,mgr_command_descs=1,mgrstat=16,mon_config_key=67} crc {mgr=3021529252,mgr_command_descs=1106606917,mgrstat=927985028,mon_config_key=3416016077})
2025-08-20T04:23:26.481 INFO:journalctl@ceph.mon.a.smithi005.stdout:Aug 20 04:23:26 smithi005 ceph-mon[32340]: scrub ok on 0,1,2: ScrubResult(keys {mon_config_key=100} crc {mon_config_key=1697361991})
2025-08-20T04:23:26.481 INFO:journalctl@ceph.mon.a.smithi005.stdout:Aug 20 04:23:26 smithi005 ceph-mon[32340]: scrub ok on 0,1,2: ScrubResult(keys {mon_config_key=52,monmap=5,nvmeofgw=3,osd_pg_creating=1,osdmap=13} crc {mon_config_key=2062148741,monmap=1514957604,nvmeofgw=3360707892,osd_pg_creating=1429930521,osdmap=660087193})
2025-08-20T04:23:26.586 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {auth=36,config=2,health=13,kv=49} crc {auth=1995396108,config=4178277379,health=2358046931,kv=10522087})
2025-08-20T04:23:26.586 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {kv=4,logm=96} crc {kv=1873782856,logm=2116167881})
2025-08-20T04:23:26.586 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=451675026})
2025-08-20T04:23:26.586 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=3513288693})
2025-08-20T04:23:26.586 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=1630115004})
2025-08-20T04:23:26.586 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=1458792183})
2025-08-20T04:23:26.586 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=2779996686})
2025-08-20T04:23:26.586 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=1001678024})
2025-08-20T04:23:26.587 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=3211679126})
2025-08-20T04:23:26.587 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=1962157845})
2025-08-20T04:23:26.587 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=2131728565})
2025-08-20T04:23:26.587 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=330912209})
2025-08-20T04:23:26.587 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=404325981})
2025-08-20T04:23:26.587 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=3378792078})
2025-08-20T04:23:26.587 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=2078952969})
2025-08-20T04:23:26.587 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=1737265018})
2025-08-20T04:23:26.587 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=137647994})
2025-08-20T04:23:26.587 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=3586421394})
2025-08-20T04:23:26.587 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=2287345961})
2025-08-20T04:23:26.587 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {logm=95,mdsmap=3,mgr=2} crc {logm=1249250401,mdsmap=1041094236,mgr=1102074085})
2025-08-20T04:23:26.588 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {mgr=16,mgr_command_descs=1,mgrstat=16,mon_config_key=67} crc {mgr=3021529252,mgr_command_descs=1106606917,mgrstat=927985028,mon_config_key=3416016077})
2025-08-20T04:23:26.588 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {mon_config_key=100} crc {mon_config_key=1697361991})
2025-08-20T04:23:26.588 INFO:journalctl@ceph.mon.b.smithi186.stdout:Aug 20 04:23:26 smithi186 ceph-mon[35320]: scrub ok on 0,1,2: ScrubResult(keys {mon_config_key=52,monmap=5,nvmeofgw=3,osd_pg_creating=1,osdmap=13} crc {mon_config_key=2062148741,monmap=1514957604,nvmeofgw=3360707892,osd_pg_creating=1429930521,osdmap=660087193})
2025-08-20T04:23:57.143 DEBUG:teuthology.exit:Got signal 15; running 1 handler...
2025-08-20T04:23:57.144 DEBUG:teuthology.task.console_log:Killing console logger for smithi005
2025-08-20T04:23:57.146 DEBUG:teuthology.task.console_log:Killing console logger for smithi186
2025-08-20T04:23:57.146 DEBUG:teuthology.task.console_log:Killing console logger for smithi188
2025-08-20T04:23:57.146 DEBUG:teuthology.exit:Finished running handlers
Found on main
/a/yuriw-2025-08-19_14:49:40-rados-wip-yuri-testing-2025-08-18-1127-distro-default-smithi/8451470
Updated by Nitzan Mordechai 6 months ago
- Is duplicate of Bug #70247: Non-zero exit code 1 from systemctl reset-failed ceph-47356c0e-f761-11ef-bb88-bd4984dce30f@mon.a added
Updated by Nitzan Mordechai 6 months ago
No OSDs or monitors were started; cephadm was issuing an error:
2025-08-19 20:38:16,432 7fab9768c740 DEBUG Determined image: 'quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph@sha256:8669133f9801d8d0a922a3568dfde2eab748bf543ea0d875ecd39e2dad925f19'
2025-08-19 20:38:16,468 7fab9768c740 INFO Non-zero exit code 125 from /usr/bin/podman container inspect --format {{.State.Status}} ceph-12cb8a96-7d3c-11f0-8741-adfe0268badd-mon-b
2025-08-19 20:38:16,468 7fab9768c740 INFO /usr/bin/podman: stderr Error: no such container ceph-12cb8a96-7d3c-11f0-8741-adfe0268badd-mon-b
2025-08-19 20:38:16,490 7fab9768c740 INFO Non-zero exit code 125 from /usr/bin/podman container inspect --format {{.State.Status}} ceph-12cb8a96-7d3c-11f0-8741-adfe0268badd-mon.b
2025-08-19 20:38:16,490 7fab9768c740 INFO /usr/bin/podman: stderr Error: no such container ceph-12cb8a96-7d3c-11f0-8741-adfe0268badd-mon.b
2025-08-19 20:38:16,490 7fab9768c740 INFO Deploy daemon mon.b ...
2025-08-19 20:38:18,591 7fab9768c740 DEBUG systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target".
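The failed podman commands above are the key signal in an otherwise noisy log. A minimal triage sketch (a hypothetical helper, not part of cephadm or teuthology) that pulls out every "Non-zero exit code" event from such a log excerpt:

```python
import re

# Matches cephadm's "Non-zero exit code <N> from <command>" log lines.
NONZERO_RE = re.compile(r"Non-zero exit code (\d+) from (.+)")

def nonzero_exits(log_text):
    """Return (exit_code, command) pairs for each non-zero exit logged."""
    events = []
    for line in log_text.splitlines():
        m = NONZERO_RE.search(line)
        if m:
            events.append((int(m.group(1)), m.group(2)))
    return events

sample = (
    "2025-08-19 20:38:16,468 7fab9768c740 INFO Non-zero exit code 125 "
    "from /usr/bin/podman container inspect --format {{.State.Status}} "
    "ceph-12cb8a96-7d3c-11f0-8741-adfe0268badd-mon-b"
)
for code, cmd in nonzero_exits(sample):
    print(code, cmd)
```

Note that exit code 125 from podman indicates an error from podman itself (here, "no such container"), which is expected before the daemon is first deployed; the failures that matter are the ones that persist after "Deploy daemon mon.b".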