Update libmultiprocess subtree to be more stable with rust IPC client #34422

fanquake merged 4 commits into bitcoin:master
Conversation
Upcoming libmultiprocess changes are expected to alter this behavior (bitcoin#34250 (comment)), making test coverage useful for documenting current behavior and validating the intended changes.
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage & Benchmarks
For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/34422.

Reviews
See the guideline for information on the review process. If your review is incorrectly listed, please copy-paste

Conflicts
Reviewers, this pull request conflicts with the following ones:

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
🚧 At least one of the CI tasks failed.

Hints
Try to run the tests locally, according to the documentation. However, a CI failure may still
Leave a comment here, if you need help tracking down a confusing failure.
Updated 08576a1 -> 5148573. Rebased 5148573 -> 556eba0. Rebased 556eba0 -> ccd1831.
https://github.com/bitcoin/bitcoin/actions/runs/22145238258/job/64020426758?pr=34422#step:11:3803: node0 stderr =================================================================
==25289==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 144 byte(s) in 1 object(s) allocated from:
#0 0x6234b1fbf6b1 in operator new(unsigned long) (/home/admin/actions-runner/_work/_temp/build/bin/bitcoin-node+0x129a6b1) (BuildId: 5aa568da87c0a0ff1a7f751ab78d286ddd8c5aa3)
#1 0x6234b247300e in std::__detail::_MakeUniq<node::(anonymous namespace)::BlockTemplateImpl>::__single_object std::make_unique<node::(anonymous namespace)::BlockTemplateImpl, node::BlockAssembler::Options const&, std::unique_ptr<node::CBlockTemplate, std::default_delete<node::CBlockTemplate>>, node::NodeContext&>(node::BlockAssembler::Options const&, std::unique_ptr<node::CBlockTemplate, std::default_delete<node::CBlockTemplate>>&&, node::NodeContext&) /usr/lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/unique_ptr.h:1070:30
#2 0x6234b247300e in node::(anonymous namespace)::BlockTemplateImpl::waitNext(node::BlockWaitOptions) /home/admin/actions-runner/_work/_temp/src/node/interfaces.cpp:927:34
#3 0x6234b2ebf961 in decltype(auto) mp::ProxyMethodTraits<ipc::capnp::messages::BlockTemplate::WaitNextParams, void>::invoke<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, node::BlockWaitOptions>(mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, node::BlockWaitOptions&&) /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/proxy.h:289:16
#4 0x6234b2ebf961 in decltype(auto) mp::ServerCall::invoke<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, node::BlockWaitOptions>(mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::TypeList<>, node::BlockWaitOptions&&) const /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:465:16
#5 0x6234b2ebe9cc in void mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>::invoke<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, node::BlockWaitOptions>(mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::TypeList<>, node::BlockWaitOptions&&) const /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:488:33
#6 0x6234b2ebe686 in void mp::PassField<mp::Accessor<mp::mining_fields::Options, 17>, node::BlockWaitOptions, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall> const&, mp::TypeList<>>(mp::Priority<0>, mp::TypeList<node::BlockWaitOptions>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall> const&, mp::TypeList<>&&) /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:307:8
#7 0x6234b2ebdfaa in decltype(auto) mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>>::invoke<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, mp::TypeList<node::BlockWaitOptions>>(mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::TypeList<node::BlockWaitOptions>) const /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:556:16
#8 0x6234b2ebdfaa in std::enable_if<std::is_same<decltype(mp::Accessor<mp::mining_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::mining_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>>, mp::TypeList<node::BlockWaitOptions>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>> const&, mp::TypeList<node::BlockWaitOptions>&&)::'lambda'(mp::CancelMonitor&)::operator()(mp::CancelMonitor&)::'lambda1'()::operator()() const /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/type-context.h:172:28
#9 0x6234b2ebbd4a in kj::Maybe<kj::Exception> kj::runCatchingExceptions<std::enable_if<std::is_same<decltype(mp::Accessor<mp::mining_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::mining_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>>, mp::TypeList<node::BlockWaitOptions>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>> const&, mp::TypeList<node::BlockWaitOptions>&&)::'lambda'(mp::CancelMonitor&)::operator()(mp::CancelMonitor&)::'lambda1'()>(mp::Accessor<mp::mining_fields::Context, 17>&&) /usr/include/kj/exception.h:371:5
#10 0x6234b2ebabf3 in std::enable_if<std::is_same<decltype(mp::Accessor<mp::mining_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::mining_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>>, mp::TypeList<node::BlockWaitOptions>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>> const&, mp::TypeList<node::BlockWaitOptions>&&)::'lambda'(mp::CancelMonitor&)::operator()(mp::CancelMonitor&) /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/type-context.h:172:28
#11 0x6234b2eb8400 in kj::Promise<mp::Accessor<mp::mining_fields::Context, 17>> mp::ProxyServer<mp::Thread>::post<capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>, std::enable_if<std::is_same<decltype(mp::Accessor<mp::mining_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::mining_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>>, mp::TypeList<node::BlockWaitOptions>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>> const&, mp::TypeList<node::BlockWaitOptions>&&)::'lambda'(mp::CancelMonitor&)>(mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&&)::'lambda'()::operator()()::'lambda'()::operator()()::'lambda0'()::operator()() const /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/proxy-io.h:742:100
#12 0x6234b2eb8400 in kj::Maybe<kj::Exception> kj::runCatchingExceptions<kj::Promise<mp::Accessor<mp::mining_fields::Context, 17>> mp::ProxyServer<mp::Thread>::post<capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>, std::enable_if<std::is_same<decltype(mp::Accessor<mp::mining_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::mining_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>>, mp::TypeList<node::BlockWaitOptions>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>> const&, mp::TypeList<node::BlockWaitOptions>&&)::'lambda'(mp::CancelMonitor&)>(mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&&)::'lambda'()::operator()()::'lambda'()::operator()()::'lambda0'()>(mp::Accessor<mp::mining_fields::Context, 17>&&) /usr/include/kj/exception.h:371:5
#13 0x6234b2eb77e8 in kj::Promise<mp::Accessor<mp::mining_fields::Context, 17>> mp::ProxyServer<mp::Thread>::post<capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>, std::enable_if<std::is_same<decltype(mp::Accessor<mp::mining_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::mining_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>>, mp::TypeList<node::BlockWaitOptions>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>> const&, mp::TypeList<node::BlockWaitOptions>&&)::'lambda'(mp::CancelMonitor&)>(mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&&)::'lambda'()::operator()()::'lambda'()::operator()() /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/proxy-io.h:742:48
#14 0x6234b2b3b21f in kj::Function<void ()>::operator()() /usr/include/kj/function.h:119:12
#15 0x6234b2b3b21f in void mp::Unlock<mp::Lock, kj::Function<void ()>&>(mp::Lock&, kj::Function<void ()>&) /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/util.h:210:5
#16 0x728f0baeadb3 (/lib/x86_64-linux-gnu/libstdc++.so.6+0xecdb3) (BuildId: 753c6c8608b61d4e67be8f0c890e03e0aa046b8b)
#17 0x6234b1f78daa in asan_thread_start(void*) crtstuff.c
Indirect leak of 336 byte(s) in 1 object(s) allocated from:
#0 0x6234b1fbf6b1 in operator new(unsigned long) (/home/admin/actions-runner/_work/_temp/build/bin/bitcoin-node+0x129a6b1) (BuildId: 5aa568da87c0a0ff1a7f751ab78d286ddd8c5aa3)
#1 0x6234b249325c in node::BlockAssembler::CreateNewBlock() /home/admin/actions-runner/_work/_temp/src/node/miner.cpp:128:26
#2 0x6234b2498d2b in node::WaitAndCreateNewBlock(ChainstateManager&, node::KernelNotifications&, CTxMemPool*, std::unique_ptr<node::CBlockTemplate, std::default_delete<node::CBlockTemplate>> const&, node::BlockWaitOptions const&, node::BlockAssembler::Options const&, bool&) /home/admin/actions-runner/_work/_temp/src/node/miner.cpp:428:32
#3 0x6234b2472fbd in node::(anonymous namespace)::BlockTemplateImpl::waitNext(node::BlockWaitOptions) /home/admin/actions-runner/_work/_temp/src/node/interfaces.cpp:926:29
#4 0x6234b2ebf961 in decltype(auto) mp::ProxyMethodTraits<ipc::capnp::messages::BlockTemplate::WaitNextParams, void>::invoke<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, node::BlockWaitOptions>(mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, node::BlockWaitOptions&&) /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/proxy.h:289:16
#5 0x6234b2ebf961 in decltype(auto) mp::ServerCall::invoke<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, node::BlockWaitOptions>(mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::TypeList<>, node::BlockWaitOptions&&) const /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:465:16
#6 0x6234b2ebe9cc in void mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>::invoke<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, node::BlockWaitOptions>(mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::TypeList<>, node::BlockWaitOptions&&) const /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:488:33
#7 0x6234b2ebe686 in void mp::PassField<mp::Accessor<mp::mining_fields::Options, 17>, node::BlockWaitOptions, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall> const&, mp::TypeList<>>(mp::Priority<0>, mp::TypeList<node::BlockWaitOptions>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall> const&, mp::TypeList<>&&) /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:307:8
#8 0x6234b2ebdfaa in decltype(auto) mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>>::invoke<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, mp::TypeList<node::BlockWaitOptions>>(mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::TypeList<node::BlockWaitOptions>) const /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/proxy-types.h:556:16
#9 0x6234b2ebdfaa in std::enable_if<std::is_same<decltype(mp::Accessor<mp::mining_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::mining_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>>, mp::TypeList<node::BlockWaitOptions>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>> const&, mp::TypeList<node::BlockWaitOptions>&&)::'lambda'(mp::CancelMonitor&)::operator()(mp::CancelMonitor&)::'lambda1'()::operator()() const /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/type-context.h:172:28
#10 0x6234b2ebbd4a in kj::Maybe<kj::Exception> kj::runCatchingExceptions<std::enable_if<std::is_same<decltype(mp::Accessor<mp::mining_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::mining_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>>, mp::TypeList<node::BlockWaitOptions>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>> const&, mp::TypeList<node::BlockWaitOptions>&&)::'lambda'(mp::CancelMonitor&)::operator()(mp::CancelMonitor&)::'lambda1'()>(mp::Accessor<mp::mining_fields::Context, 17>&&) /usr/include/kj/exception.h:371:5
#11 0x6234b2ebabf3 in std::enable_if<std::is_same<decltype(mp::Accessor<mp::mining_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::mining_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>>, mp::TypeList<node::BlockWaitOptions>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>> const&, mp::TypeList<node::BlockWaitOptions>&&)::'lambda'(mp::CancelMonitor&)::operator()(mp::CancelMonitor&) /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/type-context.h:172:28
#12 0x6234b2eb8400 in kj::Promise<mp::Accessor<mp::mining_fields::Context, 17>> mp::ProxyServer<mp::Thread>::post<capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>, std::enable_if<std::is_same<decltype(mp::Accessor<mp::mining_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::mining_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>>, mp::TypeList<node::BlockWaitOptions>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>> const&, mp::TypeList<node::BlockWaitOptions>&&)::'lambda'(mp::CancelMonitor&)>(mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&&)::'lambda'()::operator()()::'lambda'()::operator()()::'lambda0'()::operator()() const /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/proxy-io.h:742:100
#13 0x6234b2eb8400 in kj::Maybe<kj::Exception> kj::runCatchingExceptions<kj::Promise<mp::Accessor<mp::mining_fields::Context, 17>> mp::ProxyServer<mp::Thread>::post<capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>, std::enable_if<std::is_same<decltype(mp::Accessor<mp::mining_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::mining_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>>, mp::TypeList<node::BlockWaitOptions>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>> const&, mp::TypeList<node::BlockWaitOptions>&&)::'lambda'(mp::CancelMonitor&)>(mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&&)::'lambda'()::operator()()::'lambda'()::operator()()::'lambda0'()>(mp::Accessor<mp::mining_fields::Context, 17>&&) /usr/include/kj/exception.h:371:5
#14 0x6234b2eb77e8 in kj::Promise<mp::Accessor<mp::mining_fields::Context, 17>> mp::ProxyServer<mp::Thread>::post<capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>, std::enable_if<std::is_same<decltype(mp::Accessor<mp::mining_fields::Context, 17>::get(fp1.call_context.getParams())), mp::Context::Reader>::value, kj::Promise<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>::CallContext>>::type mp::PassField<mp::Accessor<mp::mining_fields::Context, 17>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>>, mp::TypeList<node::BlockWaitOptions>>(mp::Priority<1>, mp::TypeList<>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&, mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>> const&, mp::TypeList<node::BlockWaitOptions>&&)::'lambda'(mp::CancelMonitor&)>(mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults>>&&)::'lambda'()::operator()()::'lambda'()::operator()() /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/proxy-io.h:742:48
#15 0x6234b2b3b21f in kj::Function<void ()>::operator()() /usr/include/kj/function.h:119:12
#16 0x6234b2b3b21f in void mp::Unlock<mp::Lock, kj::Function<void ()>&>(mp::Lock&, kj::Function<void ()>&) /home/admin/actions-runner/_work/_temp/src/ipc/libmultiprocess/include/mp/util.h:210:5
#17 0x728f0baeadb3 (/lib/x86_64-linux-gnu/libstdc++.so.6+0xecdb3) (BuildId: 753c6c8608b61d4e67be8f0c890e03e0aa046b8b)
#18 0x6234b1f78daa in asan_thread_start(void*) crtstuff.c
Indirect leak of 144 byte(s) in 1 object(s) allocated from:
#0 0x6234b1fbf6b1 in operator new(unsigned long) (/home/admin/actions-runner/_work/_temp/build/bin/bitcoin-node+0x129a6b1) (BuildId: 5aa568da87c0a0ff1a7f751ab78d286ddd8c5aa3)
#1 0x6234b24a21e9 in std::__new_allocator<std::_Sp_counted_ptr_inplace<CTransaction const, std::allocator<void>, (__gnu_cxx::_Lock_policy)2>>::allocate(unsigned long, void const*) /usr/lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/new_allocator.h:151:27
#2 0x6234b24a21e9 in std::allocator<std::_Sp_counted_ptr_inplace<CTransaction const, std::allocator<void>, (__gnu_cxx::_Lock_policy)2>>::allocate(unsigned long) /usr/lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/allocator.h:198:32
#3 0x6234b24a21e9 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<CTransaction const, std::allocator<void>, (__gnu_cxx::_Lock_policy)2>>>::allocate(std::allocator<std::_Sp_counted_ptr_inplace<CTransaction const, std::allocator<void>, (__gnu_cxx::_Lock_policy)2>>&, unsigned long) /usr/lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/alloc_traits.h:482:20
#2 0x6234b2308b66 in std::allocator<std::shared_ptr<CTransaction const>>::allocate(unsigned long) /usr/lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/allocator.h:198:32
#3 0x6234b2308b66 in std::allocator_traits<std::allocator<std::shared_ptr<CTransaction const>>>::allocate(std::allocator<std::shared_ptr<CTransaction const>>&, unsigned long) /usr/lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/alloc_traits.h:482:20
#4 0x6234b2308b66 in std::_Vector_base<std::shared_ptr<CTransaction const>, std::allocator<std::shared_ptr<CTransaction const>>>::_M_allocate(unsigned long) /usr/lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/stl_vector.h:381:20
#5 0x6234b2308b66 in void std::vector<std::shared_ptr<CTransaction const>, std::allocator<std::shared_ptr<CTransaction const>>>::_M_realloc_insert<>(__gnu_cxx::__normal_iterator<std::shared_ptr<CTransaction const>*, std::vector<std::shared_ptr<CTransaction const>, std::allocator<std::shared_ptr<CTransaction const>>>>) /usr/lib/gcc/x86_64-linux-gnu/13/../../../../include/c++/13/bits/vector.tcc:459:33
SUMMARY: AddressSanitizer: 1066 byte(s) leaked in 12 allocation(s).
Spent some time debugging the memory leak detected by ASAN in the latest push (ccd1831): https://github.com/bitcoin/bitcoin/actions/runs/22145238258/job/64020426758?pr=34422#step:11:3803

The leak is caused by a known compiler bug in clang, not present in GCC: llvm/llvm-project#12658. It comes from this line throwing an exception inside a KJ_DEFER call, which causes the function return value (a `unique_ptr` in this case) to be leaked and never destroyed. An example of the bug can be seen at https://godbolt.org/z/Y5YcYsdYK: if that example is compiled with any version of clang, objects are leaked, and if compiled with any version of GCC, they are not. Since the clang bug is unlikely to be fixed, I'll need to rewrite the function to avoid using KJ_DEFER.
I assume bitcoin-core/libmultiprocess#210 wouldn't make a difference? If not, maybe the issue should be tracked in capnproto/capnproto. |
Rebased ccd1831 -> 90c82a5.

re: #34422 (comment)

I forgot about
Well, ok. Hopefully I just did something stupid and this isn't another compiler bug.
test/functional/interface_ipc.py (Outdated)
```python
with node.assert_debug_log(expected_msgs=["BlockTemplate.waitNext", "IPC server post request"]):
    promise = template.waitNext(ctx, waitoptions)
    await asyncio.sleep(0.1)
disconnected_log_check.enter_context(node.assert_debug_log(expected_msgs=["IPC server: socket disconnected", "canceled while executing"]))
```
Using a context that silently polls the debug log until a hidden timeout is hit is brittle and confusing.
This CI will fail here:
AssertionError: [node 0] Expected message(s) ['IPC server: socket disconnected', 'canceled while executing'] not found in log:
- 2026-02-19T12:51:18.300546Z [capnp-loop] [ipc/capnp/protocol.cpp:53] [IpcLogFn] ipc: {bitcoin-node-25248/b-capnp-loop-25252} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages6MiningEEE
node0 2026-02-19T12:51:18.300746Z [capnp-loop] [ipc/capnp/protocol.cpp:53] [IpcLogFn] ipc: {bitcoin-node-25248/b-capnp-loop-25252} IPC server: socket disconnected.
node0 2026-02-19T12:51:18.300774Z [capnp-loop] [ipc/capnp/protocol.cpp:53] [IpcLogFn] ipc: {bitcoin-node-25248/b-capnp-loop-25252} IPC server request #14 canceled while executing.
node0 2026-02-19T12:51:18.300807Z [capnp-loop] [ipc/capnp/protocol.cpp:53] [IpcLogFn] ipc: {bitcoin-node-25248/b-capnp-loop-25252} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages4InitEEE
test 2026-02-19T12:51:18.301543Z TestFramework (DEBUG): Closing down network thread
For a quick and dirty fix, you can set `timeout=2` (or whatever). However, my recommendation would be to wait on a "real" condition in the inner context, e.g.:
```python
with assert_debug_log([disconnect_msg]):
    initiate_disconnect()
    self.wait_until(disconnect)
```
re: #34422 (comment)
> Using a context that silently polls the debug log until a hidden timeout is hit is brittle and confusing.
Wow, thank you for noticing this and identifying the problem! But I don't think this is right.
There is literally no timeout specified on this line of code, nor should there be. The assert_debug_log function did not take a timeout option when it was introduced in #14024 and tests should not need to hardcode timeout values every time they take an action that does not complete immediately.
I think the test framework should provide some default timeout controlling how long tests are allowed to wait idly before they are considered failed. Specifically, it would seem nice to replace the `rpc_timeout` variable with a generic `failure_timeout` variable, and also stop multiplying it by `timeout_factor` up front, instead applying the factor when RPC calls are made. This is probably better discussed elsewhere, but I feel like the recent change in #34581 adding `timeout=2` values everywhere is not the best long-term approach.
For now, though, it is probably easiest to add `timeout=2` here as you suggest, to fix the silent conflict with #34581.
> For a quick and dirty fix, you can set `timeout=2` (or whatever). However, my recommendation would be to wait on a "real" condition in the inner context, e.g.:
>
> ```python
> with assert_debug_log([disconnect_msg]):
>     initiate_disconnect()
>     self.wait_until(disconnect)
> ```
This suggestion doesn't actually make sense. The test is calling `waitNext`, then disconnecting the IPC client, then waiting for the server to detect that the IPC client has disconnected, then calling `generate`.

The point of the disconnected_log_check is not to make sure something is logged, but to delay the `generate` call that follows until the server processes the client disconnect, making the test deterministic and ensuring it is checking the right thing.
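To illustrate why this acts as a barrier, here is a self-contained toy model (not the real test framework; the log path, message, and timings are invented for illustration). A background "server" thread records the disconnect in a log file asynchronously, and the "test" polls that file so it only proceeds once the event has actually been observed:

```python
import os
import tempfile
import threading
import time

# Toy stand-ins for the node's debug.log and the server noticing a disconnect.
log_path = os.path.join(tempfile.mkdtemp(), "debug.log")
open(log_path, "w").close()

def server_notices_disconnect():
    # The server processes the hangup some time after it happens.
    time.sleep(0.2)
    with open(log_path, "a", encoding="utf-8") as f:
        f.write("IPC server: socket disconnected\n")

threading.Thread(target=server_notices_disconnect).start()

# The "test" waits on the logged condition before taking its next step,
# instead of sleeping for a fixed time and hoping the server caught up.
deadline = time.monotonic() + 5
while "socket disconnected" not in open(log_path, encoding="utf-8").read():
    assert time.monotonic() < deadline, "server never observed the disconnect"
    time.sleep(0.05)

print("disconnect observed; safe to call generate() now")
```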
> There is literally no timeout specified on this line of code, nor should there be. The `assert_debug_log` function did not take a timeout option when it was introduced in #14024 and tests should not need to hardcode timeout values every time they take an action that does not complete immediately.
I think back then the timeout was 0 and the function was synchronous and immediate without any loop/sleep/wait.
> I think the test framework should provide some default timeout controlling how long tests are allowed to wait idly before they are considered failed. Specifically, it would seem nice to replace the `rpc_timeout` variable with a generic `failure_timeout` variable, and also stop multiplying it by `timeout_factor` up front, instead applying the factor when RPC calls are made. This is probably better discussed elsewhere, but I feel like the recent change in #34581 adding `timeout=2` values everywhere is not the best long-term approach.
#34581 does not add timeout values everywhere. In fact, it removed many unneeded ones. The goal is to clearly differentiate between the two cases:

1. The debug log is read and checked exactly once, immediately, without any loop/sleep/wait, and the result is documented to be reported back immediately by failing the test or continuing after the context is closed.
2. The debug log is read continuously in a loop, until a specified timeout. The result is documented to be reported at or before the timeout by failing the test or continuing after the context is closed successfully.
I think the two cases are sufficiently different, with the first case being preferable (if possible), to make the intention of the context visible by just glancing over the line. Otherwise, this is going to lead to more of the issues that were fixed in #34571.
> This suggestion doesn't actually make sense. The test is calling `waitNext`, then disconnecting the IPC client, then waiting for the server to detect that the IPC client has disconnected, then calling `generate`.
I see. It was mostly a general recommendation to prefer case (1). However, if the debug log truly is the only thing to synchronise on, then case (2) is appropriate. Though I think it should properly be documented as this case.
> I think back then the timeout was 0 and the function was synchronous and immediate without any loop/sleep/wait.
This is surprising, and thanks for the explanation. I was thinking a 0 timeout was not useful because the client doesn't have control over when messages appear in the log file and always needs to wait for them to appear. But it seems like 0 does actually work in most cases, so maybe it is a good default.
> #34581 does not add timeout values everywhere. In fact, it removed many unneeded ones. The goal is to clearly differentiate between the two cases:
>
> 1. The debug log is read and checked exactly once, immediately, without any loop/sleep/wait, and the result is documented to be reported back immediately by failing the test or continuing after the context is closed.
> 2. The debug log is read continuously in a loop, until a specified timeout. The result is documented to be reported at or before the timeout by failing the test or continuing after the context is closed successfully.
I'm surprised case 1 works or is generally useful. In this PR and #34284, `assert_debug_log` is used to wait until events happen, not to check log file formatting. But I guess this PR is the outlier case.
I still think the change implemented in #34581 adding many `timeout=2` arguments is not good. It would seem better to distinguish the two cases with a `wait=False` default argument and, if `wait=True` is specified, have the test wait some fixed amount of time, like 60 seconds (multiplied by the timeout factor), before failing. If tests want to specify some other amount of time to wait, that could be allowed, but it should rarely be necessary. I don't think it should be necessary to hardcode a specific timeout value in every call that is trying to wait.
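To make the proposal concrete, here is a minimal sketch of how such a `wait` parameter could behave. This is an assumption-laden illustration, not the real framework API: the standalone signature, the `DEFAULT_WAIT` constant, and the file-reading helper are all invented here, and the real `assert_debug_log` is a context manager on the test node that would also scale the default by `--timeoutfactor`.

```python
import time

DEFAULT_WAIT = 60  # hypothetical default; the framework would scale this by the timeout factor

def log_contains(path, expected_msgs):
    # Illustrative helper: check whether all expected messages appear in the log.
    with open(path, encoding="utf-8") as f:
        log = f.read()
    return all(msg in log for msg in expected_msgs)

def assert_debug_log(path, expected_msgs, wait=False):
    # wait=False: check the log exactly once, immediately (case 1 above).
    # wait=True:  poll for up to DEFAULT_WAIT seconds (case 2 above).
    # wait=<n>:   poll for up to n seconds (rarely needed).
    timeout = DEFAULT_WAIT if wait is True else (wait or 0)
    deadline = time.monotonic() + timeout
    while True:
        if log_contains(path, expected_msgs):
            return
        if time.monotonic() >= deadline:
            raise AssertionError(f"Expected message(s) {expected_msgs} not found in log")
        time.sleep(0.05)
```

With this shape, `assert_debug_log(path, msgs)` keeps the immediate single check, while `assert_debug_log(path, msgs, wait=True)` expresses "this event happens eventually" without hardcoding a number at every call site.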
> I still think the change implemented in #34581 adding many `timeout=2` arguments is not good. It would seem better to distinguish the two cases with a `wait=False` default argument and, if `wait=True` is specified, have the test wait some fixed amount of time, like 60 seconds (multiplied by the timeout factor), before failing. If tests want to specify some other amount of time to wait, that could be allowed, but it should rarely be necessary. I don't think it should be necessary to hardcode a specific timeout value in every call that is trying to wait.
Maybe `assert_debug_log` could be renamed to `assert_debug_log_poll` and a new `assert_debug_log` alias with the timeout arg removed could be added. This way, `assert_debug_log_poll` can have a default timeout again.
> Maybe `assert_debug_log` could be renamed to `assert_debug_log_poll` and a new `assert_debug_log` alias with the timeout arg removed could be added. This way, `assert_debug_log_poll` can have a default timeout again.
This feels more awkward than just having one function, dropping the `timeout` parameter, and adding a `wait=False|True|<number of seconds>` parameter as suggested. Having two functions instead of one makes functionality less discoverable, and makes it likely that more suboptimal tests will be written because you can't see all the functionality provided in one place. Having two names to remember instead of one, and having to keep them straight when reading and writing tests, also adds unnecessary difficulty.
> dropping the `timeout` parameter, and adding a `wait=False|True|<number of seconds>` parameter
Was thinking about this, but I think it is fine to leave this as-is, because that would just change all the `timeout=2` args to `wait=True`. However, code readers may wonder how long a typical wait is. I think it is fine to say `timeout=2` to indicate something that waits for a short time only, and `timeout=60` to indicate a different case for something that waits longer.
Rebased 90c82a5 -> c51b2fd.
```python
self.start_node(0)
if os.environ.get("CONTAINER_NAME") == "ci_mac_native":
    # Current macos CI job throws a different, vague error.
    # This seems to be a problem specific to the CI job and does
```
I'm not sure about this workaround. I have a macOS machine where I build with `REDUCE_EXPORTS=ON`, and if this was merged, that machine would have a failing functional test:
Traceback (most recent call last):
File "/Users/xxx/bitcoin/build/test/functional/interface_ipc_mining.py", line 285, in async_routine
await mining.createNewBlock(ctx, opts)
capnp.lib.capnp.KjException: (remote):0: failed: remote exception: unknown non-KJ exception of type: kj::Exception
stack: 10ade92e4 10add2bd0 10ab09d08 10ab0ae08
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/xxx/bitcoin/test/functional/test_framework/test_framework.py", line 142, in main
self.run_test()
~~~~~~~~~~~~~^^
File "/Users/xxx/bitcoin/build/test/functional/interface_ipc_mining.py", line 381, in run_test
self.run_ipc_option_override_test()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/Users/xxx/bitcoin/build/test/functional/interface_ipc_mining.py", line 302, in run_ipc_option_override_test
asyncio.run(capnp.run(async_routine()))
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/.pyenv/versions/3.14.3/lib/python3.14/asyncio/runners.py", line 204, in run
return runner.run(main)
~~~~~~~~~~^^^^^^
File "/Users/xxx/.pyenv/versions/3.14.3/lib/python3.14/asyncio/runners.py", line 127, in run
return self._loop.run_until_complete(task)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/Users/xxx/.pyenv/versions/3.14.3/lib/python3.14/asyncio/base_events.py", line 719, in run_until_complete
return future.result()
~~~~~~~~~~~~~^^
File "capnp/lib/capnp.pyx", line 2083, in run
File "capnp/lib/capnp.pyx", line 2084, in capnp.lib.capnp.run
return await coro
File "/Users/xxx/bitcoin/build/test/functional/interface_ipc_mining.py", line 299, in async_routine
assert_equal(e.description, "remote exception: std::exception: block_reserved_weight (0) must be at least 2000 weight units")
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/bitcoin/test/functional/test_framework/util.py", line 80, in assert_equal
raise AssertionError("not(%s)" % " == ".join(str(arg) for arg in (thing1, thing2) + args))
AssertionError: not(remote exception: unknown non-KJ exception of type: kj::Exception == remote exception: std::exception: block_reserved_weight (0) must be at least 2000 weight units)
2026-02-27T11:17:30.335593Z TestFramework (INFO): Not stopping nodes as test failed. The dangling processes will be cleaned up later.
This also means anyone wanting to build on macOS, with the same configuration as we build releases, would have a failing test. That seems like something that should be fixed; or, if there are undiagnosed bugs, we could disable the functionality for macOS until someone investigates.
re: #34422 (comment)
> I build with `REDUCE_EXPORTS=ON`, and if this was merged, that would have a failing functional test:
Thanks! Previously we only knew that dropping `REDUCE_EXPORTS=ON` would make the macOS CI job pass; we didn't know that adding it would make other macOS builds fail. So it makes sense to generalize the workaround, and this is done in the latest push.
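A possible shape for that generalization, as a hypothetical sketch only (the actual condition in the PR may differ), is to key the workaround off the platform instead of the CI container name:

```python
import sys

def expect_vague_macos_error() -> bool:
    # Hypothetical generalized check: any macOS run can hit the vague
    # kj::Exception error, not just the ci_mac_native container.
    # Previously keyed off: os.environ.get("CONTAINER_NAME") == "ci_mac_native"
    return sys.platform == "darwin"
```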
Updated 31b0a40 -> d16709c (pr/subtree-8.13 -> pr/subtree-8.14, compare), generalizing the ci_mac_native workaround to avoid failures with other macOS REDUCE_EXPORTS builds.
Updated d16709c -> 8fe91f3 (pr/subtree-8.14 -> pr/subtree-8.15, compare) to fix CI error https://github.com/bitcoin/bitcoin/actions/runs/22487913679/job/65143941250?pr=34422#step:11:3277 (#34581 strikes again)
re-ACK 8fe91f3. Updated subtree since my last review and adjusted the "macOS + REDUCE_EXPORTS" handling in the test.
janb84 left a comment:
re-ACK 8fe91f3
changes since last ack:
- changes regarding macOS `REDUCE_EXPORTS=ON` errors
- subtree updates for CI errors
Pre-update, the test fails with `REDUCE_EXPORTS=ON`:

```
wallet_migration.py     | ○ Skipped | 0 s
interface_ipc_mining.py | ✖ Failed  | 7 s
ALL                     | ✖ Failed  | 1816 s (accumulated)
```

After the changes, the test passes correctly:

```
interface_ipc_mining.py | ✓ Passed | 7 s
ALL                     | ✓ Passed | 1752 s (accumulated)
```
Added this to the 31.0 milestone since I think it makes sense to include, but I'm not very sure about this, so someone can correct me if this isn't the case.
This is a bit late, but given it's scoped to (experimental) IPC and fixes known issues, I think this is ok. I would like to see the macOS issue followed up on, because that seems odd. Can that be tracked in an issue somewhere?
Created #34723 to track this |
b1638ac doc: Bump version 8 > 9 (Ryan Ofsky)

Pull request description:

  Increase version number after bitcoin core update bitcoin/bitcoin#34422. Also update version history and documentation.

ACKs for top commit:
  Sjors: ACK b1638ac

Tree-SHA512: 2d99993740bf2e8b5cee298e4b945cac66182483fe03e91611e144e0332bcd9200c6f53e1b2d9369bb57a31f590c84df662cd61ec123fdc5a8abcd1ad1b016d8
Includes:
The main change is bitcoin-core/libmultiprocess#240, which fixes issues with asynchronous requests (#33923) and unclean disconnects (#34250) that happen with the rust mining client. It also adds tests for these fixes, which had some previous review in #34284 (that PR was closed to simplify dependencies between PRs).
The changes can be verified by running `test/lint/git-subtree-check.sh src/ipc/libmultiprocess` as described in the developer notes and lint instructions.

Resolves #33923 and #34250