Bug #71930
Status: Closed
Segmentation fault caught in Objecter::handle_osd_op_reply() (ERROR: Test api_tier_pp)
Description
/a/yuriw-2025-06-28_18:55:21-rados-wip-yuri-testing-2025-06-28-0812-distro-default-smithi/8355186
2025-06-28T21:37:38.108 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: [ FAILED ] LibRadosTwoPoolsPP.FlushSnap (11412 ms)
2025-06-28T21:37:38.108 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.FlushTryFlushRaces
2025-06-28T21:37:38.108 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/20.3.0-1270-ge9110efd/rpm/el9/BUILD/ceph-20.3.0-1270-ge9110efd/src/test/librados/tier_cxx.cc:2349: Failure
2025-06-28T21:37:38.108 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: Expected equality of these values:
2025-06-28T21:37:38.109 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 0
2025-06-28T21:37:38.109 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: completion->get_return_value()
2025-06-28T21:37:38.109 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: Which is: -22
2025-06-28T21:37:38.109 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp:
2025-06-28T21:37:38.109 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: [ FAILED ] LibRadosTwoPoolsPP.FlushTryFlushRaces (10128 ms)
2025-06-28T21:37:38.109 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TryFlushReadRace
2025-06-28T21:37:38.109 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: /home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/20.3.0-1270-ge9110efd/rpm/el9/BUILD/ceph-20.3.0-1270-ge9110efd/src/test/librados/tier_cxx.cc:2551: Failure
2025-06-28T21:37:38.109 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: Expected equality of these values:
2025-06-28T21:37:38.109 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 0
2025-06-28T21:37:38.110 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: completion->get_return_value()
2025-06-28T21:37:38.110 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: Which is: -22
2025-06-28T21:37:38.110 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp:
2025-06-28T21:37:38.110 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: *** Caught signal (Segmentation fault) **
2025-06-28T21:37:38.110 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: in thread 7ffacbfff640 thread_name:msgr-worker-2
2025-06-28T21:37:38.110 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: ceph version 20.3.0-1270-ge9110efd (e9110efd575ab2f14b47cc35e4110fa2b6355764) tentacle (dev - RelWithDebInfo)
2025-06-28T21:37:38.110 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 1: /lib64/libc.so.6(+0x3ea60) [0x7ffad383ea60]
2025-06-28T21:37:38.110 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 2: /lib64/librados.so.2(+0xcafd8) [0x7ffad5293fd8]
2025-06-28T21:37:38.110 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 3: /lib64/librados.so.2(+0xa426a) [0x7ffad526d26a]
2025-06-28T21:37:38.110 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 4: /lib64/librados.so.2(+0x458fd) [0x7ffad520e8fd]
2025-06-28T21:37:38.110 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 5: /usr/lib64/ceph/libceph-common.so.2(+0x784d75) [0x7ffad4d84d75]
2025-06-28T21:37:38.111 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 6: (Objecter::handle_osd_op_reply(MOSDOpReply*)+0x114c) [0x7ffad4c133fc]
2025-06-28T21:37:38.111 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 7: (Objecter::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0x17b) [0x7ffad4c026fb]
2025-06-28T21:37:38.111 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 8: (DispatchQueue::fast_dispatch(boost::intrusive_ptr<Message> const&)+0xfd) [0x7ffad49bb5bd]
2025-06-28T21:37:38.111 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 9: /usr/lib64/ceph/libceph-common.so.2(+0x44e895) [0x7ffad4a4e895]
2025-06-28T21:37:38.111 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 10: (ProtocolV1::handle_message_footer(char*, int)+0xfdb) [0x7ffad4a6e20b]
2025-06-28T21:37:38.111 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 11: /usr/lib64/ceph/libceph-common.so.2(+0x4721ba) [0x7ffad4a721ba]
2025-06-28T21:37:38.111 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 12: (AsyncConnection::process()+0x66b) [0x7ffad4a5a70b]
2025-06-28T21:37:38.111 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 13: (EventCenter::process_events(unsigned int, std::chrono::duration<unsigned long, std::ratio<1l, 1000000000l> >*)+0x1d1) [0x7ffad4a9ef51]
2025-06-28T21:37:38.111 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 14: /usr/lib64/ceph/libceph-common.so.2(+0x49fab6) [0x7ffad4a9fab6]
2025-06-28T21:37:38.112 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 15: /lib64/libstdc++.so.6(+0xdbae4) [0x7ffad3cdbae4]
2025-06-28T21:37:38.112 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 16: /lib64/libc.so.6(+0x8a3b2) [0x7ffad388a3b2]
2025-06-28T21:37:38.112 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 17: /lib64/libc.so.6(+0x10f430) [0x7ffad390f430]
2025-06-28T21:37:38.112 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 2025-06-28T21:37:38.104+0000 7ffacbfff640 -1 *** Caught signal (Segmentation fault) **
2025-06-28T21:37:38.112 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: in thread 7ffacbfff640 thread_name:msgr-worker-2
2025-06-28T21:37:38.112 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp:
2025-06-28T21:37:38.112 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: ceph version 20.3.0-1270-ge9110efd (e9110efd575ab2f14b47cc35e4110fa2b6355764) tentacle (dev - RelWithDebInfo)
2025-06-28T21:37:38.112 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 1: /lib64/libc.so.6(+0x3ea60) [0x7ffad383ea60]
2025-06-28T21:37:38.112 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 2: /lib64/librados.so.2(+0xcafd8) [0x7ffad5293fd8]
2025-06-28T21:37:38.113 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 3: /lib64/librados.so.2(+0xa426a) [0x7ffad526d26a]
2025-06-28T21:37:38.113 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 4: /lib64/librados.so.2(+0x458fd) [0x7ffad520e8fd]
2025-06-28T21:37:38.113 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 5: /usr/lib64/ceph/libceph-common.so.2(+0x784d75) [0x7ffad4d84d75]
2025-06-28T21:37:38.113 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 6: (Objecter::handle_osd_op_reply(MOSDOpReply*)+0x114c) [0x7ffad4c133fc]
2025-06-28T21:37:38.113 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 7: (Objecter::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0x17b) [0x7ffad4c026fb]
2025-06-28T21:37:38.113 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 8: (DispatchQueue::fast_dispatch(boost::intrusive_ptr<Message> const&)+0xfd) [0x7ffad49bb5bd]
2025-06-28T21:37:38.113 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 9: /usr/lib64/ceph/libceph-common.so.2(+0x44e895) [0x7ffad4a4e895]
2025-06-28T21:37:38.113 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 10: (ProtocolV1::handle_message_footer(char*, int)+0xfdb) [0x7ffad4a6e20b]
2025-06-28T21:37:38.113 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 11: /usr/lib64/ceph/libceph-common.so.2(+0x4721ba) [0x7ffad4a721ba]
2025-06-28T21:37:38.113 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 12: (AsyncConnection::process()+0x66b) [0x7ffad4a5a70b]
2025-06-28T21:37:38.113 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 13: (EventCenter::process_events(unsigned int, std::chrono::duration<unsigned long, std::ratio<1l, 1000000000l> >*)+0x1d1) [0x7ffad4a9ef51]
2025-06-28T21:37:38.113 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 14: /usr/lib64/ceph/libceph-common.so.2(+0x49fab6) [0x7ffad4a9fab6]
2025-06-28T21:37:38.113 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 15: /lib64/libstdc++.so.6(+0xdbae4) [0x7ffad3cdbae4]
2025-06-28T21:37:38.114 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 16: /lib64/libc.so.6(+0x8a3b2) [0x7ffad388a3b2]
2025-06-28T21:37:38.114 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: 17: /lib64/libc.so.6(+0x10f430) [0x7ffad390f430]
2025-06-28T21:37:38.114 INFO:tasks.workunit.client.0.smithi142.stdout: api_tier_pp: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Coredumps available in /a/yuriw-2025-06-28_18:55:21-rados-wip-yuri-testing-2025-06-28-0812-distro-default-smithi/8355186/remote/smithi142/coredump
Also found in:
/a/yuriw-2025-06-25_18:29:16-rados-wip-yuri-testing-2025-06-25-0715-distro-default-smithi/8349961
Updated by Laura Flores 9 months ago
So far, I've only seen this failure occur in the context of cache tiering. The common element among all the test descriptions that hit this failure is "read-affinity/balance tasks/rados_api_tests", so this may point to cache tiering not interacting well with read-affinity=balance.
Historically, this test has been flaky and has failed from a different issue before we even got to running the `rados/test.sh` workunit (https://tracker.ceph.com/issues/68337). So, it's possible that this issue was obscuring the Objecter segfault.
Updated by Laura Flores 9 months ago
- Assignee set to Nitzan Mordechai
@Nitzan Mordechai can you check out this bug to see if it has any relation to the changes in https://github.com/ceph/ceph/pull/63425?
This change to the Objecter code is the most recent I can find. In the original QA run where it was approved (https://tracker.ceph.com/issues/71712), the relevant `read-affinity/balance tasks/rados_api_tests` test failed from something else that has since been fixed: https://pulpito.ceph.com/skanta-2025-06-19_01:59:03-rados-wip-bharath4-testing-2025-06-17-2135-distro-default-smithi/8335911/
Do you think this bug in the Objecter code was obscured? Or is it from a different source?
Updated by Nitzan Mordechai 9 months ago
@Laura Flores it looks like a race condition. I couldn't recreate it locally yet, but it doesn't look related to https://github.com/ceph/ceph/pull/63425.
I'll keep watching it, thanks.
Updated by Nitzan Mordechai 9 months ago · Edited
client log shows: (/a/yuriw-2025-06-28_18:55:21-rados-wip-yuri-testing-2025-06-28-0812-distro-default-smithi/8355186/remote/smithi142/log/ceph-client.admin.27630.log.gz)
2025-06-28T21:37:38.103+0000 7ffacbfff640  1 -- 172.21.15.142:0/1530860632 <== osd.0 v1:172.21.15.46:6804/3693721550 228 ==== osd_op_reply(4044 foo [stat] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 147+0+0 (unknown 3707749838 0 0) 0x7ffab4002d70 con 0x7ffaac077550
2025-06-28T21:37:38.103+0000 7ffacbfff640 10 client.5183.objecter ms_dispatch2 0x5586f4ab2ce0 osd_op_reply(4044 foo [stat] v0'0 uv0 ondisk = -2 ((2) No such file or directory))
2025-06-28T21:37:38.103+0000 7ffacbfff640 10 client.5183.objecter in handle_osd_op_reply
2025-06-28T21:37:38.103+0000 7ffacbfff640  7 client.5183.objecter handle_osd_op_reply 4044 ondisk uv 0 in 93.f attempt 0
2025-06-28T21:37:38.103+0000 7ffacbfff640 10 client.5183.objecter op 0 rval 0 len 0
2025-06-28T21:37:38.103+0000 7ffacbfff640 15 client.5183.objecter handle_osd_op_reply completed tid 4044
2025-06-28T21:37:38.103+0000 7ffacbfff640 15 client.5183.objecter _finish_op 4044
2025-06-28T21:37:38.103+0000 7ffacbfff640 20 client.5183.objecter put_session s=0x7ffaac076c60 osd=0 49
2025-06-28T21:37:38.103+0000 7ffacbfff640 15 client.5183.objecter _session_op_remove 0 4044
2025-06-28T21:37:38.103+0000 7ffacbfff640  5 client.5183.objecter 100 in flight
2025-06-28T21:37:38.104+0000 7ffacbfff640 -1 *** Caught signal (Segmentation fault) **
 in thread 7ffacbfff640 thread_name:msgr-worker-2
 ceph version 20.3.0-1270-ge9110efd (e9110efd575ab2f14b47cc35e4110fa2b6355764) tentacle (dev - RelWithDebInfo)
 1: /lib64/libc.so.6(+0x3ea60) [0x7ffad383ea60]
 2: /lib64/librados.so.2(+0xcafd8) [0x7ffad5293fd8]
 3: /lib64/librados.so.2(+0xa426a) [0x7ffad526d26a]
 4: /lib64/librados.so.2(+0x458fd) [0x7ffad520e8fd]
 5: /usr/lib64/ceph/libceph-common.so.2(+0x784d75) [0x7ffad4d84d75]
 6: (Objecter::handle_osd_op_reply(MOSDOpReply*)+0x114c) [0x7ffad4c133fc]
 7: (Objecter::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0x17b) [0x7ffad4c026fb]
 8: (DispatchQueue::fast_dispatch(boost::intrusive_ptr<Message> const&)+0xfd) [0x7ffad49bb5bd]
 9: /usr/lib64/ceph/libceph-common.so.2(+0x44e895) [0x7ffad4a4e895]
 10: (ProtocolV1::handle_message_footer(char*, int)+0xfdb) [0x7ffad4a6e20b]
 11: /usr/lib64/ceph/libceph-common.so.2(+0x4721ba) [0x7ffad4a721ba]
 12: (AsyncConnection::process()+0x66b) [0x7ffad4a5a70b]
 13: (EventCenter::process_events(unsigned int, std::chrono::duration<unsigned long, std::ratio<1l, 1000000000l> >*)+0x1d1) [0x7ffad4a9ef51]
 14: /usr/lib64/ceph/libceph-common.so.2(+0x49fab6) [0x7ffad4a9fab6]
 15: /lib64/libstdc++.so.6(+0xdbae4) [0x7ffad3cdbae4]
 16: /lib64/libc.so.6(+0x8a3b2) [0x7ffad388a3b2]
 17: /lib64/libc.so.6(+0x10f430) [0x7ffad390f430]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
The code in handle_osd_op_reply that was executed was:
ldout(cct, 15) << "handle_osd_op_reply completed tid " << tid << dendl;
_finish_op(op, 0);
ldout(cct, 5) << num_in_flight << " in flight" << dendl;
// serialize completions
if (completion_lock.mutex()) {
  completion_lock.lock();
}
sl.unlock();
// do callbacks
if (Op::has_completion(onfinish)) {
  if (rc == 0 && handler_error) {
    Op::complete(std::move(onfinish), handler_error, -EIO, service.get_executor());
  } else if (handler_error) {
    Op::complete(std::move(onfinish), handler_error, rc, service.get_executor());
  } else {
    Op::complete(std::move(onfinish), osdcode(rc), rc, service.get_executor());
  }
}
if (completion_lock.mutex()) {
  completion_lock.unlock();
}
All of the "do callbacks" part is fairly new; it was added by https://github.com/ceph/ceph/pull/52495 - I think that's why we are seeing this.
I'll keep checking it.
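As a side note, the serialization that the snippet above performs (take the completion lock before dropping the per-op lock, run the user callback, then release) can be sketched with a toy stand-in. All names here (Dispatcher, handle_reply, onfinish) are mine for illustration, not Objecter's:

```cpp
#include <cassert>
#include <functional>
#include <mutex>
#include <vector>

// Toy sketch of the "serialize completions" pattern: the completion lock is
// acquired *before* the op lock is released, so user callbacks run one at a
// time and in reply order, even if several messenger threads deliver replies.
struct Dispatcher {
  std::mutex completion_lock;   // serializes user callbacks
  std::vector<int> completed;   // records the order callbacks fired in

  void handle_reply(std::unique_lock<std::mutex>& op_lock, int tid,
                    const std::function<void(int)>& onfinish) {
    std::unique_lock<std::mutex> cl(completion_lock);  // take before dropping op_lock
    op_lock.unlock();            // other replies may now make progress
    onfinish(tid);               // callback runs under completion_lock only
    completed.push_back(tid);
  }                              // completion_lock released here
};
```

The design point is that the callback never runs while the op lock is held (which would invite lock-order deadlocks with the application), yet two callbacks can never interleave.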
Updated by Laura Flores 9 months ago
Thanks for confirming, @Nitzan Mordechai! Feel free to unassign if you don't have time to work on it - I mainly wanted to confirm the Objecter part with you.
Updated by Nitzan Mordechai 9 months ago · Edited
- Status changed from New to In Progress
Please ignore my previous comment.
The test failure is caused by changes introduced in PR https://github.com/ceph/ceph/pull/62806: we no longer allow an operation that combines the BALANCE_READS and RWORDERED flags. Because the op is rejected with -EINVAL, the object is never created, the read chain enters an infinite loop, and the test eventually segfaults.
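The shape of that rejection can be sketched as follows. The flag bit values and the function name here are hypothetical stand-ins chosen for illustration; the real constants (CEPH_OSD_FLAG_BALANCE_READS, CEPH_OSD_FLAG_RWORDERED) and the actual check live in the Ceph tree:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical flag bits for illustration only.
constexpr uint32_t FLAG_BALANCE_READS = 1u << 0;
constexpr uint32_t FLAG_RWORDERED     = 1u << 1;

// An op asking for both a balanced (any-replica) read and read/write
// ordering is contradictory, so it is rejected up front with -EINVAL.
// That is the -22 the gtest assertions above report.
int validate_op_flags(uint32_t flags) {
  const bool balanced  = flags & FLAG_BALANCE_READS;
  const bool rwordered = flags & FLAG_RWORDERED;
  if (balanced && rwordered)
    return -22;  // -EINVAL
  return 0;      // either flag alone is fine
}
```

The test's AIO completion then sees -22 as its return value, which is exactly what the `Which is: -22` gtest output shows.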
Updated by Lee Sanders 9 months ago
/a/skanta-2025-07-05_06:21:05-rados-wip-bharath15-testing-2025-07-04-1752-distro-default-smithi/8370839
Updated by Laura Flores 9 months ago
- Has duplicate Bug #72040: workunit test rados/test.sh fails added
Updated by Nitzan Mordechai 8 months ago · Edited
The segfault is a side effect of an intended test failure.
The test writes an object to a tier pool and then initiates a loop of AIO read operations on the cache. During this loop, cache_try_flush is called. This operation now correctly fails due to a check added in PR #62806, which disallows using balance and rwordered flags together.
This intentional failure triggers the test's TearDown routine, which frees the ioctx pointer. However, since the AIO read operation is still in flight, it subsequently attempts to use the freed pointer, leading to a use-after-free segfault.
We no longer maintain cache tiering, so I'll open a PR that removes that test.
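The lifetime bug described in this comment can be sketched in isolation. Everything below (FakeIoCtx, aio_read) is a hypothetical stand-in, not librados code; the weak_ptr variant is used so the sketch can observe the teardown safely instead of actually dereferencing freed memory:

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <future>
#include <memory>
#include <thread>

// Toy stand-in for an IoCtx whose lifetime the test controls.
struct FakeIoCtx {
  std::atomic<int> reads_completed{0};
};

// Simulates an async read that outlives the caller's interest in it,
// mirroring how the in-flight AIO read keeps referring to the ioctx
// after TearDown has freed it. With a raw pointer this access would be
// the use-after-free; the weak_ptr lets the read detect destruction.
std::future<bool> aio_read(std::weak_ptr<FakeIoCtx> ctx) {
  return std::async(std::launch::async, [ctx] {
    // Delay so the caller's teardown can race ahead of the read.
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    if (auto p = ctx.lock()) {   // ioctx still alive: safe to touch
      p->reads_completed++;
      return true;
    }
    return false;                // ioctx already destroyed: bail out
  });
}
```

In the real test the read holds a plain pointer, so once TearDown frees the ioctx the completion path dereferences freed memory, which is what surfaces as the Objecter::handle_osd_op_reply segfault.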
Updated by Aishwarya Mathuria 8 months ago
/a/skanta-2025-06-29_15:00:39-rados-wip-bharath1-testing-2025-06-28-2149-distro-default-smithi/8356809
Updated by Radoslaw Zarzynski 8 months ago
- Status changed from In Progress to Fix Under Review
Updated by Connor Fawcett 8 months ago
/a/skanta-2025-07-19_23:59:58-rados-wip-bharath5-testing-2025-07-18-0518-distro-default-smithi/8397505
Updated by Radoslaw Zarzynski 8 months ago
The PR has merge conflicts, and there is one request to extend its scope (see Kefu's comment).
I'm pretty sure Nitzan will tackle this when he's back.
Updated by Laura Flores 8 months ago
/a/skanta-2025-07-26_06:22:18-rados-wip-bharath9-testing-2025-07-26-0628-distro-default-smithi/8407493
Updated by Laura Flores 8 months ago
/a/yuriw-2025-07-28_23:36:09-rados-tentacle-release-distro-default-smithi/8413608
Updated by Kamoltat (Junior) Sirivadhna 7 months ago
/a/yuriw-2025-07-28_18:11:33-rados-wip-yuri2-testing-2025-07-24-0816-tentacle-distro-default-smithi/8412124
Updated by Kamoltat (Junior) Sirivadhna 7 months ago
- Subject changed from Segmentation fault caught in Objecter::handle_osd_op_reply() to Segmentation fault caught in Objecter::handle_osd_op_reply()
Updated by Kamoltat (Junior) Sirivadhna 7 months ago
- Subject changed from Segmentation fault caught in Objecter::handle_osd_op_reply() to Segmentation fault caught in Objecter::handle_osd_op_reply() (ERROR: Test api_tier_pp)
Updated by Kamoltat (Junior) Sirivadhna 7 months ago
/a/teuthology-2025-08-10_20:00:42-rados-main-distro-default-smithi
Jobs: 8435067, 8434794, 8435081, 8434941
Updated by Connor Fawcett 7 months ago
/a/skanta-2025-08-14_03:18:47-rados-wip-bharath4-testing-2025-08-13-0949-tentacle-distro-default-smithi/8442200
Updated by Laura Flores 7 months ago
/a/yuriw-2025-08-14_23:11:43-rados-wip-yuri3-testing-2025-08-14-0737-tentacle-distro-default-smithi/8443886
Updated by Aishwarya Mathuria 7 months ago
/a/skanta-2025-08-14_20:27:05-rados-wip-bharath5-testing-2025-08-13-0959-distro-default-smithi/8443384
Updated by Connor Fawcett 7 months ago
/a/skanta-2025-08-24_15:53:17-rados-wip-bharath4-testing-2025-08-24-0454-distro-default-smithi/8460748
Updated by Sridhar Seshasayee 7 months ago
/a/skanta-2025-08-24_23:24:05-rados-wip-bharath9-testing-2025-08-24-1258-tentacle-distro-default-smithi/8461792
Updated by Aishwarya Mathuria 7 months ago
/a/skanta-2025-08-21_23:24:45-rados-wip-bharath7-testing-2025-08-19-0959-distro-default-smithi/8457142
Updated by Connor Fawcett 7 months ago
/a/skanta-2025-08-31_23:44:30-rados-wip-bharath4-testing-2025-08-31-1138-distro-default-smithi/8474709
Updated by Jonathan Bailey 7 months ago
/a/skanta-2025-08-05_23:48:19-rados-wip-bharath1-testing-2025-08-05-0512-distro-default-smithi/8427210
Updated by Jonathan Bailey 7 months ago
/a/skanta-2025-08-28_03:20:37-rados-wip-bharath1-testing-2025-08-26-1433-distro-default-smithi/8467898
/a/skanta-2025-08-27_01:46:19-rados-wip-bharath1-testing-2025-08-26-1433-distro-default-smithi/8466331
Updated by Upkeep Bot 6 months ago
- Status changed from Fix Under Review to Pending Backport
- Merge Commit set to 62bcf65e8c0995783bb3e368909716346874ad62
- Fixed In set to v20.3.0-2957-g62bcf65e8c
- Upkeep Timestamp set to 2025-09-11T17:49:56+00:00
Updated by Upkeep Bot 6 months ago
- Copied to Backport #72996: tentacle: Segmentation fault caught in Objecter::handle_osd_op_reply() (ERROR: Test api_tier_pp) added
Updated by Connor Fawcett 6 months ago
/a/yuriw-2025-09-06_15:55:33-rados-wip-yuri3-testing-2025-09-04-1437-tentacle-distro-default-smithi/8484407
Updated by Laura Flores 6 months ago
/a/yuriw-2025-09-15_20:16:05-rados-wip-yuri-testing-2025-09-15-1029-tentacle-distro-default-smithi/8501759
Updated by Upkeep Bot 6 months ago
- Status changed from Pending Backport to Resolved
- Upkeep Timestamp changed from 2025-09-11T17:49:56+00:00 to 2025-10-04T01:01:05+00:00
Updated by Laura Flores 5 months ago
/a/yuriw-2025-10-15_20:55:26-rados-tentacle-release-distro-default-smithi/8553987