QA Run #71514
Closed
wip-vshankar-testing-20250613.134551-debug
Updated by Venky Shankar 10 months ago
- Subject changed from wip-vshankar-testing-20250531.174644-debug to wip-vshankar-testing-20250601.092319-debug
- Description updated (diff)
- Shaman Build changed from wip-vshankar-testing-20250531.174644-debug to wip-vshankar-testing-20250601.092319-debug
- QA Runs changed from wip-vshankar-testing-20250531.174644-debug to wip-vshankar-testing-20250601.092319-debug
- Git Branch changed from ceph/ceph-ci/commits/testing/wip-vshankar-testing-20250531.174644-debug to ceph/ceph-ci/commits/testing/wip-vshankar-testing-20250601.092319-debug
Updated by Venky Shankar 10 months ago
- Status changed from QA Testing to QA Building
Rebuilding after including another client-side fix (for priority).
Updated by Venky Shankar 10 months ago
- Subject changed from wip-vshankar-testing-20250601.092319-debug to wip-vshankar-testing-20250602.035232-debug
- Shaman Build changed from wip-vshankar-testing-20250601.092319-debug to wip-vshankar-testing-20250602.035232-debug
- QA Runs changed from wip-vshankar-testing-20250601.092319-debug to wip-vshankar-testing-20250602.035232-debug
- Git Branch changed from ceph/ceph-ci/commits/testing/wip-vshankar-testing-20250601.092319-debug to ceph/ceph-ci/commits/testing/wip-vshankar-testing-20250602.035232-debug
Updated by Venky Shankar 10 months ago
Blocked on https://tracker.ceph.com/issues/71572
Updated by Venky Shankar 9 months ago
- Subject changed from wip-vshankar-testing-20250602.035232-debug to wip-vshankar-testing-20250610.034655-debug
- Description updated (diff)
- Shaman Build changed from wip-vshankar-testing-20250602.035232-debug to wip-vshankar-testing-20250610.034655-debug
- QA Runs changed from wip-vshankar-testing-20250602.035232-debug to wip-vshankar-testing-20250610.034655-debug
- Git Branch changed from ceph/ceph-ci/commits/testing/wip-vshankar-testing-20250602.035232-debug to ceph/ceph-ci/commits/testing/wip-vshankar-testing-20250610.034655-debug
Updated by Venky Shankar 9 months ago
Rebuilding after including another client-side fix.
Updated by Venky Shankar 9 months ago
PR https://github.com/ceph/ceph/pull/63636 received some more comments. Addressed those and rebuilding.
Updated by Venky Shankar 9 months ago
- Subject changed from wip-vshankar-testing-20250610.034655-debug to wip-vshankar-testing-20250613.134551-debug
- Shaman Build changed from wip-vshankar-testing-20250610.034655-debug to wip-vshankar-testing-20250613.134551-debug
- QA Runs changed from wip-vshankar-testing-20250610.034655-debug to wip-vshankar-testing-20250613.134551-debug
- Git Branch changed from ceph/ceph-ci/commits/testing/wip-vshankar-testing-20250610.034655-debug to ceph/ceph-ci/commits/testing/wip-vshankar-testing-20250613.134551-debug
Updated by Venky Shankar 9 months ago
Wow - new cephadm-related failures have cropped up. See: https://pulpito.ceph.com/vshankar-2025-06-13_17:03:06-fs-wip-vshankar-testing-20250613.134551-debug-testing-default-smithi/8327011/
2025-06-14T21:57:14.877 INFO:journalctl@ceph.mon.a.smithi017.stdout: /usr/bin/podman: stderr RuntimeError: Error: ['lsblk: vg_nvme/lv_4: not a block device']
2025-06-14T21:57:14.877 INFO:journalctl@ceph.mon.a.smithi017.stdout: Traceback (most recent call last):
2025-06-14T21:57:14.877 INFO:journalctl@ceph.mon.a.smithi017.stdout:   File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
2025-06-14T21:57:14.877 INFO:journalctl@ceph.mon.a.smithi017.stdout:     return _run_code(code, main_globals, None,
2025-06-14T21:57:14.877 INFO:journalctl@ceph.mon.a.smithi017.stdout:   File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
2025-06-14T21:57:14.877 INFO:journalctl@ceph.mon.a.smithi017.stdout:     exec(code, run_globals)
2025-06-14T21:57:14.877 INFO:journalctl@ceph.mon.a.smithi017.stdout:   File "/tmp/tmpff540w0x.cephadm.build/app/__main__.py", line 5274, in <module>
2025-06-14T21:57:14.877 INFO:journalctl@ceph.mon.a.smithi017.stdout:   File "/tmp/tmpff540w0x.cephadm.build/app/__main__.py", line 5262, in main
2025-06-14T21:57:14.877 INFO:journalctl@ceph.mon.a.smithi017.stdout:   File "/tmp/tmpff540w0x.cephadm.build/app/__main__.py", line 397, in _infer_config
2025-06-14T21:57:14.877 INFO:journalctl@ceph.mon.a.smithi017.stdout:   File "/tmp/tmpff540w0x.cephadm.build/app/__main__.py", line 312, in _infer_fsid
2025-06-14T21:57:14.877 INFO:journalctl@ceph.mon.a.smithi017.stdout:   File "/tmp/tmpff540w0x.cephadm.build/app/__main__.py", line 425, in _infer_image
2025-06-14T21:57:14.877 INFO:journalctl@ceph.mon.a.smithi017.stdout:   File "/tmp/tmpff540w0x.cephadm.build/app/__main__.py", line 299, in _validate_fsid
2025-06-14T21:57:14.877 INFO:journalctl@ceph.mon.a.smithi017.stdout:   File "/tmp/tmpff540w0x.cephadm.build/app/__main__.py", line 3223, in command_ceph_volume
2025-06-14T21:57:14.878 INFO:journalctl@ceph.mon.a.smithi017.stdout:   File "/tmp/tmpff540w0x.cephadm.build/app/cephadmlib/call_wrappers.py", line 310, in call_throws
2025-06-14T21:57:14.878 INFO:journalctl@ceph.mon.a.smithi017.stdout: RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph@sha256:d18b19e1a24d3b4ee6fa955c0800fbbc344720262c7cdf54dc31a23edd40595e -e NODE_NAME=smithi017 -e CEPH_VOLUME_OSDSPEC_AFFINITY=default -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/daacdd5a-4969-11f0-870c-adfe0268badd:/var/run/ceph:z -v /var/log/ceph/daacdd5a-4969-11f0-870c-adfe0268badd:/var/log/ceph:z -v /var/lib/ceph/daacdd5a-4969-11f0-870c-adfe0268badd/crash:/var/lib/ceph/crash:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/daacdd5a-4969-11f0-870c-adfe0268badd/selinux:/sys/fs/selinux:ro -v /:/rootfs:rslave -v /etc/hosts:/etc/hosts:ro -v /tmp/ceph-tmpuhkj498e:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpuh97pbm8:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph@sha256:d18b19e1a24d3b4ee6fa955c0800fbbc344720262c7cdf54dc31a23edd40595e lvm batch --no-auto vg_nvme/lv_4 --yes --no-systemd
2025-06-14T21:57:14.878 INFO:journalctl@ceph.mon.a.smithi017.stdout:Jun 14 21:57:14 smithi017 ceph-mon[37531]: pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2025-06-14T21:57:14.878 INFO:journalctl@ceph.mon.a.smithi017.stdout:Jun 14 21:57:14 smithi017 ceph-mon[37531]: Health check failed: Failed to apply 1 service(s): osd.default (CEPHADM_APPLY_SPEC_FAIL)
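The immediate failure is ceph-volume's lsblk check rejecting the LV it was handed. A minimal sketch of that precondition, assuming the vg_nvme/lv_4 path named in the log above (the real check runs inside the ceph-volume container, not on the host):

# Minimal sketch, assuming the vg_nvme/lv_4 LV from the log above; run it on
# the affected node (e.g. smithi017). This mirrors the block-device
# precondition that 'ceph-volume lvm batch' tripped over, not its actual code.
lv_path=/dev/vg_nvme/lv_4
if lsblk "${lv_path}" >/dev/null 2>&1; then
    echo "${lv_path} is a block device; ceph-volume should accept it"
else
    echo "${lv_path}: not a block device (matches the failure above)" >&2
fi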
Updated by Venky Shankar 9 months ago
- Status changed from QA Building to QA Needs Approval
I'm going to review this run even though it's really messy due to unrelated failures. However, I will mark the backport as deferred until we get a clean run after the issue is resolved.
Updated by Venky Shankar 9 months ago
Venky Shankar wrote in #note-11:
> I'm going to review this run even though it's really messy due to unrelated failures. However, I will mark the backport as deferred until we get a clean run after the issue is resolved.
The jobs have picked up the crimson flavour for the OSDs, which is causing lots of failures. Not sure how that happened. This is being resolved and we are talking to the crimson team. Standby!
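For context, a hedged sketch of how a rescheduled run could pin the flavour explicitly; teuthology-suite's --flavor option is assumed here, and the suite, branch, kernel, and machine type are read off this run's pulpito name rather than the actual scheduling command:

# Hedged sketch, not the command actually used for this run: pin the build
# flavour when scheduling so the jobs do not pick up crimson builds.
teuthology-suite --suite fs \
    --ceph wip-vshankar-testing-20250613.134551-debug \
    --kernel testing \
    --flavor default \
    --machine-type smithi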
Updated by Venky Shankar 9 months ago
- Status changed from QA Needs Approval to QA Approved
Updated by Venky Shankar 9 months ago
- Status changed from QA Approved to QA Closed
https://github.com/ceph/ceph/pull/63636 has received more comments and needs a fix for the Windows build. Dropping it.
The rest have merged.