QA Run #70787

aclamk-testing-ganymede-2025-04-02-1944

Added by Adam Kupczyk 12 months ago. Updated 12 months ago.

Status: QA Approved
Priority: Normal
Assignee: -
Git Branch:
Tags (freeform):

Description

https://github.com/ceph/ceph/pull/57448 - os/bluestore: Recompression, part 3. Segmented onode.
https://github.com/ceph/ceph/pull/62224 - os/bluestore: Fast WAL for RocksDB
https://github.com/ceph/ceph/pull/62588 - os/bluestore: Fix race in BlueFS truncate / remove

#1

Updated by Adam Kupczyk 12 months ago · Edited

  • Status changed from QA Testing to QA Approved

[[ aclamk-testing-ganymede-2025-04-02-1944 ]]
RUN 1 - 200

[8222473]
rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}}

https://tracker.ceph.com/issues/69803

[8222475]
rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}}

Command failed on smithi002 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://git.ceph.com/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 670c4f70b9adc3af705d1baff2a9a5f403a4774c'

2025-04-03T07:16:03.685 INFO:tasks.workunit.client.0.smithi002.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'...
2025-04-03T07:17:32.808 INFO:tasks.workunit.client.0.smithi002.stderr:Updating files: 100% (13353/13353), done.
2025-04-03T07:17:32.839 DEBUG:teuthology.orchestra.run:got remote process result: 128
2025-04-03T07:17:32.840 INFO:tasks.workunit.client.0.smithi002.stderr:fatal: reference is not a tree: 670c4f70b9adc3af705d1baff2a9a5f403a4774c
2025-04-03T07:17:32.840 ERROR:teuthology.run_tasks:Saw exception from tasks.
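
"fatal: reference is not a tree" means the commit was not present in the clone, i.e. the build SHA had likely not propagated to git.ceph.com when the job ran. A quick way to check manually (commands are my suggestion, not part of the run):

# Definitive check inside an existing clone: does the commit object exist?
git cat-file -e '670c4f70b9adc3af705d1baff2a9a5f403a4774c^{commit}' && echo present
# Or, without a clone: see whether any ref on the mirror points at it yet
git ls-remote https://git.ceph.com/ceph.git | grep -i 670c4f70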

[8222484]
rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} tasks/e2e}

Command failed on smithi151 with status 1: 'yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2'

https://tracker.ceph.com/issues/68668
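
The mkfs.xfs failure looks like machine provisioning rather than the PRs under test (same signature as job 8222632 below). If it needs chasing, a first look at the device state could be (my suggestion, not from the run):

# Is the LV present, and is anything holding it open?
lsblk /dev/vg_nvme/lv_2
sudo fuser -v /dev/vg_nvme/lv_2
# Dry run: report stale filesystem signatures without erasing anything
sudo wipefs -n /dev/vg_nvme/lv_2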

[8222500]
rados/verify/{centos_latest ceph clusters/fixed-4 d-thrash/none mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/{bluestore/{alloc$/{btree} base compr$/{yes$/{lz4}} mem$/{normal-1} onode-segment$/{512K-onoff} write$/{write_random}}} rados read-affinity/local tasks/mon_recovery validater/valgrind}

Command failed on smithi094 with status 32: 'sync && sudo umount -f /var/lib/ceph/osd/ceph-7'

https://tracker.ceph.com/issues/62713

[8222508]
rados/mgr/{clusters/{2-node-mgr} debug/mgr distro/{ubuntu_latest} mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore/{alloc$/{avl} base compr$/{no$/{no}} mem$/{normal-1} onode-segment$/{256K} write$/{write_random}}} tasks/{1-install 2-ceph 3-mgrmodules 4-units/module_selftest}}

Test failure: test_selftest_command_spam (tasks.mgr.test_module_selftest.TestModuleSelftest)

https://tracker.ceph.com/issues/69494

[8222528]
rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-4} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/{bluestore/{alloc$/{btree} base compr$/{yes$/{snappy}} mem$/{normal-2} onode-segment$/{none} write$/{write_random}}} rados supported-random-distro$/{centos_latest} thrashers/pggrow_host thrashosds-health workloads/rados_api_tests}

Command failed (workunit test rados/test.sh) on smithi167 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=670c4f70b9adc3af705d1baff2a9a5f403a4774c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

2025-04-03T07:55:06.899 INFO:tasks.workunit.client.0.smithi167.stdout: api_watch_notify_pp: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/20.0.0-1055-g670c4f70/rpm/el9/BUILD/ceph-20.0.0-1055-g670c4f70/src/osdc/Objecter.h: In function 'void Objecter::LingerOp::finished_async()' thread 7fb4ce6ed640 time 2025-04-03T07:55:06.842102+0000
2025-04-03T07:55:06.900 INFO:tasks.workunit.client.0.smithi167.stdout: api_watch_notify_pp: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/20.0.0-1055-g670c4f70/rpm/el9/BUILD/ceph-20.0.0-1055-g670c4f70/src/osdc/Objecter.h: 2393: FAILED ceph_assert(!watch_pending_async.empty())
2025-04-03T07:55:06.900 INFO:tasks.workunit.client.0.smithi167.stdout: api_watch_notify_pp: ceph version 20.0.0-1055-g670c4f70 (670c4f70b9adc3af705d1baff2a9a5f403a4774c) tentacle (dev)
2025-04-03T07:55:06.900 INFO:tasks.workunit.client.0.smithi167.stdout: api_watch_notify_pp: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x12e) [0x7fb4d198822a]
2025-04-03T07:55:06.900 INFO:tasks.workunit.client.0.smithi167.stdout: api_watch_notify_pp: 2: /usr/lib64/ceph/libceph-common.so.2(0x1e9efd) [0x7fb4d19e9efd]
2025-04-03T07:55:06.900 INFO:tasks.workunit.client.0.smithi167.stdout: api_watch_notify_pp: 3: (boost::asio::detail::executor_op<boost::asio::detail::binder0<CB_DoWatchError>, std::allocator<void>, boost::asio::detail::scheduler_operation>::do_complete(void*, boost::asio::detail::scheduler_operation*, boost::system::error_code const&, unsigned long)+0xbb) [0x7fb4d1e251ab]
2025-04-03T07:55:06.900 INFO:tasks.workunit.client.0.smithi167.stdout: api_watch_notify_pp: 4: /usr/lib64/ceph/libceph-common.so.2(+0x4d6f06) [0x7fb4d1cd6f06]
2025-04-03T07:55:06.900 INFO:tasks.workunit.client.0.smithi167.stdout: api_watch_notify_pp: 5: (boost::asio::detail::executor_op<boost::asio::detail::strand_executor_service::invoker<boost::asio::io_context::basic_executor_type<std::allocator<void>, 0ul> const, void>, std::allocator<void>, boost::asio::detail::scheduler_operation>::do_complete(void*, boost::asio::detail::scheduler_operation*, boost::system::error_code const&, unsigned long)+0xb2) [0x7fb4d1cd7442]
2025-04-03T07:55:06.900 INFO:tasks.workunit.client.0.smithi167.stdout: api_watch_notify_pp: 6: /lib64/librados.so.2(+0xd5ea6) [0x7fb4d22edea6]
2025-04-03T07:55:06.900 INFO:tasks.workunit.client.0.smithi167.stdout: api_watch_notify_pp: 7: /lib64/librados.so.2(+0xca107) [0x7fb4d22e2107]
2025-04-03T07:55:06.900 INFO:tasks.workunit.client.0.smithi167.stdout: api_watch_notify_pp: 8: /lib64/libstdc++.so.6(+0xdbae4) [0x7fb4d14dbae4]
2025-04-03T07:55:06.900 INFO:tasks.workunit.client.0.smithi167.stdout: api_watch_notify_pp: 9: /lib64/libc.so.6(+0x8a3b2) [0x7fb4d108a3b2]
2025-04-03T07:55:06.901 INFO:tasks.workunit.client.0.smithi167.stdout: api_watch_notify_pp: 10: /lib64/libc.so.6(+0x10f430) [0x7fb4d110f430]

https://tracker.ceph.com/issues/69838
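
To chase the assert locally, the failing case is in the watch/notify C++ API tests; assuming the usual test binary name from a ceph build tree (my assumption, not taken from this log):

# Loop the suspect test binary against a vstart cluster to tickle the race
cd build && ../src/vstart.sh -n -d
for i in $(seq 1 50); do
  bin/ceph_test_rados_api_watch_notify_pp || break   # stop on first assert
done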

[8222539]
rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/classic task/test_ca_signed_key}

"2025-04-03T08:02:08.949294+0000 mon.a (mon.0) 316 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

OSD stopped by design. Health check cleared soon afterwards.
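
For transient CEPHADM_FAILED_DAEMON warnings like this, something along these lines (my suggestion, not from the run) confirms the warning cleared and rules out a real crash:

ceph health detail              # is CEPHADM_FAILED_DAEMON still present?
ceph orch ps | grep -v running  # any daemon not in "running" state
ceph crash ls                   # make sure no daemon actually crashed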

[8222554]
rados/thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/3-size-2-min-size 1-install/squid backoff/normal ceph clusters/{three-plus-one} d-balancer/on mon_election/classic msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/radosbench}

"2025-04-03T08:20:00.000149+0000 mon.a (mon.0) 2763 : cluster [WRN] pg 1.0 is stuck undersized for 2m, current state active+undersized+degraded, last acting [11,6]" in cluster log

Recovery appeared to be progressing fine, but did not finish in time.
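
A quick way to tell slow-but-healthy recovery from a genuinely stuck PG (commands are my suggestion):

ceph -s                                    # overall recovery/backfill progress
ceph pg dump pgs_brief | grep undersized   # PGs still undersized, if any
ceph pg 1.0 query | jq .recovery_state     # what this PG is waiting on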

[8222558]
rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} tasks/dashboard}

Test failure: test_list_enabled_module (tasks.mgr.dashboard.test_mgr_module.MgrModuleTest)

https://tracker.ceph.com/issues/62972

[8222560]
rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}}

Command crashed: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph_test_admin_socket_output --all'"

https://tracker.ceph.com/issues/70707

[8222564]
rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/classic task/test_cephadm_repos}

Command failed (workunit test cephadm/test_repos.sh) on smithi152 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=670c4f70b9adc3af705d1baff2a9a5f403a4774c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'

Failed to deploy.

[8222632]
rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} tasks/e2e}

Command failed on smithi102 with status 1: 'yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2'

https://tracker.ceph.com/issues/68668

[8222640]
rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/classic task/test_rgw_multisite}

"2025-04-03T09:05:20.708468+0000 mon.a (mon.0) 503 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

Became healthy later.
I will assume it is a fluke.
In any case, it is very unlikely to be related to the PRs under test.
