HashJoin respects max_joined_block_size_rows#56996

Merged
alexey-milovidov merged 6 commits into master from vdimir/hash_join_max_block_size on Dec 27, 2023
Conversation

@vdimir
Member

@vdimir vdimir commented Nov 20, 2023

Changelog category (leave one):

  • Performance Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

  • HashJoin respects the setting max_joined_block_size_rows and does not produce large blocks for ALL JOIN

Ref #54662

@robot-ch-test-poll4 added the pr-performance label (Pull request with some performance improvements) Nov 20, 2023
@robot-ch-test-poll4
Contributor

robot-ch-test-poll4 commented Nov 20, 2023

This is an automated comment for commit 9b13705 with a description of existing statuses. It's updated for the latest CI run.

❌ Click here to open a full report in a separate page

Successful checks

| Check name | Description | Status |
|---|---|---|
| AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parenthesis. If it fails, ask a maintainer for help | ✅ success |
| CI running | A meta-check that indicates the running CI. Normally, it's in success or pending state. The failed status indicates some problems with the PR | ✅ success |
| ClickBench | Runs [ClickBench](https://github.com/ClickHouse/ClickBench/) with instant-attach table | ✅ success |
| ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log, grepping for cmake. Use these options and follow the general build process | ✅ success |
| Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success |
| Docker image for servers | The check to build and optionally push the mentioned image to docker hub | ✅ success |
| Docs check | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success |
| Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success |
| Flaky tests | Checks if newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test has failed at least once, or was too long, this check will be red. We don't allow flaky tests, read the doc | ✅ success |
| Install packages | Checks that the built packages are installable in a clear environment | ✅ success |
| Mergeable Check | Checks if all other necessary checks are successful | ✅ success |
| Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | ✅ success |
| SQLTest | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success |
| SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool | ✅ success |
| Sqllogic | Runs clickhouse on the sqllogic test set against sqlite and checks that all statements pass | ✅ success |
| Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ✅ success |
| Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ✅ success |
| Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ✅ success |
| Style Check | Runs a set of checks to keep the code style clean. If some of the tests failed, see the related log from the report | ✅ success |
| Unit tests | Runs the unit tests for different release types | ✅ success |

Failed checks

| Check name | Description | Status |
|---|---|---|
| Integration tests | The integration tests report. In parenthesis the package type is given, and in square brackets are the optional part/total tests | ❌ failure |
| Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from the PR. It checks if the new server can successfully start up without any errors, crashes or sanitizer asserts | ❌ failure |

@liuneng1994
Contributor

The code only controls the size of the columns from the left table. In the use case I designed, there are more columns in the right table. These columns are all generated at once during the join, using a lot of memory. When the right table has many columns (>40), performance here is also a big problem; this may be the reason why it is slower than my version. The expansion of the right-table columns can also cause OOM.

What is more difficult to handle is that a single row in the left table may be associated with tens of thousands of records, because the data may be unbalanced. It is necessary to record the processing status while probing the hash map, or to do lazy materialization.

@vdimir
Member Author

vdimir commented Nov 21, 2023

@liuneng1994

With the current approach we will insert a number of rows equal to the number of matches. But it should not be a problem as long as it is smaller than the block size.

Imagine we have a block size of 65409 and each row in the left table has 500 matches, like in the perf test. So we will generate a block of ~65500 rows, and it should not be an issue. The problem can arise when the number of matches is more than half of the max block size. Do you mean that problem, or is there something different?

@vdimir vdimir force-pushed the vdimir/hash_join_max_block_size branch from 2f8fe35 to 7d07196 Compare November 21, 2023 13:37
@liuneng1994
Contributor

liuneng1994 commented Nov 22, 2023

With the current approach we will insert a number of rows equal to the number of matches. But it should not be a problem as long as it is smaller than the block size.

Sorry, I misunderstood. The memory issue can basically be solved.

An extreme case: there is a row that matches a lot of records (> max_joined_block_rows) and needs to output a lot of columns. This situation may still not be solved. In other words, max_joined_block_rows is not the maximum output size in the strict sense and can be exceeded in theory. My code is able to strictly guarantee the maximum output size.


bool has_required_right_keys = (required_right_keys.columns() != 0);
added_columns.need_filter = join_features.need_filter || has_required_right_keys;
added_columns.max_joined_block_rows = table_join->maxJoinedBlockRows();
Contributor

Not sure if AddedColumns::columns has pre-allocated memory

@baibaichen
Contributor

With the current approach we will insert a number of rows equal to the number of matches. But it should not be a problem as long as it is smaller than the block size.

Imagine we have a block size of 65409 and each row in the left table has 500 matches, like in the perf test. So we will generate a block of ~65500 rows, and it should not be an issue. The problem can arise when the number of matches is more than half of the max block size. Do you mean that problem, or is there something different?

@vdimir neng's case doesn't include such a situation, but his implementation already solved this issue, i.e., max_joined_block_size_rows is respected in any case.

As for performance, we guess it would be related to whether memory is pre-allocated or not; see neng's comment on

added_columns.max_joined_block_rows = table_join->maxJoinedBlockRows();

But it is just a guess; probably perf can find something different.

@vdimir vdimir force-pushed the vdimir/hash_join_max_block_size branch from 190a72d to 5779753 Compare November 30, 2023 10:01
@zhanglistar
Contributor

@vdimir Any update on this PR? Thanks.

@vdimir vdimir force-pushed the vdimir/hash_join_max_block_size branch from 5779753 to 8d9fa3c Compare December 6, 2023 11:25
@vdimir
Member Author

vdimir commented Dec 6, 2023

Requested review from random people from the team, maybe someone will take a look.

@devcrafter devcrafter self-assigned this Dec 6, 2023
@baibaichen
Contributor

baibaichen commented Dec 6, 2023

Requested review from random people from the team, maybe someone will take a look.

How's the performance compared to #54662?

@vdimir
Member Author

vdimir commented Dec 6, 2023

How's the performance compared to #54662?

Unfortunately, a bit worse than in that PR, but it is an improvement compared to master. I still cannot wrap my head around what makes the crucial difference, because the idea is essentially the same. However, I assume it is still worth merging this PR first, since it's quite a small change, and then trying to figure out whether we can make it faster.

if (unlikely(current_offset > max_joined_block_rows))
{
added_columns.offsets_to_replicate->resize_assume_reserved(i);
added_columns.filter.resize_assume_reserved(i);
Member Author

Shouldn't we also check if constexpr (need_filter) ?

Member

We need to ensure that it's reserved

@liuneng1994
Contributor

@vdimir I found the reason for the slowness. After this PR is merged, I will submit a new PR to optimize performance.

@vdimir
Member Author

vdimir commented Dec 15, 2023

I found the reason for the slowness. After this PR is merged, I will submit a new PR to optimize performance.

Sounds good! (@devcrafter)

@devcrafter
Member

I found the reason for the slowness. After this PR is merged, I will submit a new PR to optimize performance.

Sounds good! (@devcrafter)

@vdimir Looking into it, but please fix the build

@vdimir vdimir force-pushed the vdimir/hash_join_max_block_size branch 2 times, most recently from 9e10ee7 to 51eec7a Compare December 19, 2023 12:23
@vdimir
Member Author

vdimir commented Dec 20, 2023

The CI failures are hardly related, but still pretty interesting:

Stress test (debug) — Check timeout expired Details

2023.12.19 15:57:34.457840 [ 547 ] {} <Fatal> Application: Child process was terminated by signal 9 (KILL). If it is not done by 'forcestop' command or manually, the possible cause is OOM Killer (see 'dmesg' and look at the '/var/log/kern.log' for the details).

Stress test (tsan) — Hung check failed, possible deadlock found (see hung_check.log) Details

2023-12-19 16:48:05,220 Checking if some queries hung
Using queries from '/usr/share/clickhouse-test/queries' directory
Connecting to ClickHouse server... OK
Received exception from server (version 23.12.1):
Code: 219. DB::Exception: Received from localhost:9000. DB::Exception: New table appeared in database being dropped or detached. Try again.. (DATABASE_NOT_EMPTY)
(query: DETACH DATABASE db01802)
Traceback (most recent call last):
  File "/usr/bin/stress", line 374, in <module>
    main()
  File "/usr/bin/stress", line 359, in main
    res = call(cmd, shell=True, stdout=tee.stdin, stderr=STDOUT, timeout=600)
  File "/usr/lib/python3.10/subprocess.py", line 347, in call
    return p.wait(timeout=timeout)
  File "/usr/lib/python3.10/subprocess.py", line 1209, in wait
    return self._wait(timeout=timeout)
  File "/usr/lib/python3.10/subprocess.py", line 1951, in _wait
    raise TimeoutExpired(self.args, timeout)
subprocess.TimeoutExpired: Command '/usr/bin/clickhouse-test --client-option max_untracked_memory=1Gi max_memory_usage_for_user=0 memory_profiler_step=1Gi --database=system --hung-check --report-logs-stats 00001_select_1' timed out after 600 seconds

Upgrade check (msan) — check_status.tsv doesn't exists Details

MemorySanitizer: use-of-uninitialized-value
==231025==WARNING: MemorySanitizer: use-of-uninitialized-value
    #0 0x5646cfdc5fdd in DB::ReplicatedMergeTreeAttachThread::run() build_docker/./src/Storages/MergeTree/ReplicatedMergeTreeAttachThread.cpp:83:9
    #1 0x5646cfdcfc7e in DB::ReplicatedMergeTreeAttachThread::ReplicatedMergeTreeAttachThread(DB::StorageReplicatedMergeTree&)::$_0::operator()() const build_docker/./src/Storages/MergeTree/ReplicatedMergeTreeAttachThread.cpp:24:82
    #2 0x5646cfdcfc7e in decltype(std::declval<DB::ReplicatedMergeTreeAttachThread::ReplicatedMergeTreeAttachThread(DB::StorageReplicatedMergeTree&)::$_0&>()()) std::__1::__invoke[abi:v15000]<DB::ReplicatedMergeTreeAttachThread::ReplicatedMergeTreeAttachThread(DB::StorageReplicatedMergeTree&)::$_0&>(DB::ReplicatedMergeTreeAttachThread::ReplicatedMergeTreeAttachThread(DB::StorageReplicatedMergeTree&)::$_0&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
    #3 0x5646cfdcfc7e in void std::__1::__invoke_void_return_wrapper<void, true>::__call<DB::ReplicatedMergeTreeAttachThread::ReplicatedMergeTreeAttachThread(DB::StorageReplicatedMergeTree&)::$_0&>(DB::ReplicatedMergeTreeAttachThread::ReplicatedMergeTreeAttachThread(DB::StorageReplicatedMergeTree&)::$_0&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9
...

Member fields were destroyed
    #0 0x56469d600d1d in __sanitizer_dtor_callback_fields (/usr/bin/clickhouse+0x7c86d1d) (BuildId: b02ef26d094f99621fd60a2d973ca865dffde192)
    #1 0x5646cfdc4177 in DB::ReplicatedMergeTreeAttachThread::~ReplicatedMergeTreeAttachThread() build_docker/./src/Storages/MergeTree/ReplicatedMergeTreeAttachThread.h:37:20
    #2 0x5646cfdc4177 in DB::ReplicatedMergeTreeAttachThread::~ReplicatedMergeTreeAttachThread() build_docker/./src/Storages/MergeTree/ReplicatedMergeTreeAttachThread.cpp:32:1
    #3 0x5646ce6f1e5d in std::__1::__optional_destruct_base<DB::ReplicatedMergeTreeAttachThread, false>::~__optional_destruct_base[abi:v15000]() build_docker/./contrib/llvm-project/libcxx/include/optional:261:21
    #4 0x5646ce6f1e5d in DB::StorageReplicatedMergeTree::~StorageReplicatedMergeTree() build_docker/./src/Storages/StorageReplicatedMergeTree.cpp:5267:1
...

https://s3.amazonaws.com/clickhouse-test-reports/56996/51eec7a185e0ebd15363b4ceb3a981903815ebf1/upgrade_check__msan_/stderr.log

Upgrade check (tsan) — check_status.tsv doesn't exists Details

ThreadSanitizer: data race
WARNING: ThreadSanitizer: data race (pid=1018668)
  Read of size 8 at 0x7b7800c6ea28 by main thread:
    #0 std::__1::shared_ptr<DB::BackgroundSchedulePoolTaskInfo>::get[abi:v15000]() const build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:801:16 (clickhouse+0x194f74c8) (BuildId: eb6f1638a1dca515120703137057aa43dcfd8685)
    #1 std::__1::shared_ptr<DB::BackgroundSchedulePoolTaskInfo>::operator bool[abi:v15000]() const build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:833:16 (clickhouse+0x194f74c8)
    #2 bool std::__1::operator!=[abi:v15000]<DB::BackgroundSchedulePoolTaskInfo>(std::__1::shared_ptr<DB::BackgroundSchedulePoolTaskInfo> const&, std::nullptr_t) build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:1254:30 (clickhouse+0x194f74c8)
    #3 DB::BackgroundSchedulePoolTaskHolder::operator bool() const build_docker/./src/Core/BackgroundSchedulePool.h:173:55 (clickhouse+0x194f74c8)
    #4 DB::BackgroundJobsAssignee::finish() build_docker/./src/Storages/MergeTree/BackgroundJobsAssignee.cpp:111:9 (clickhouse+0x194f74c8)
    #5 DB::StorageMergeTree::shutdown(bool) build_docker/./src/Storages/StorageMergeTree.cpp:190:36 (clickhouse+0x19a6e56b) (BuildId: eb6f1638a1dca515120703137057aa43dcfd8685)
    #6 DB::IStorage::flushAndShutdown(bool) build_docker/./src/Storages/IStorage.h:573:9 (clickhouse+0x1726ed4d) (BuildId: eb6f1638a1dca515120703137057aa43dcfd8685)
    #7 DB::DatabaseWithOwnTablesBase::shutdown() build_docker/./src/Databases/DatabasesCommon.cpp:311:20 (clickhouse+0x1726ed4d)
    #8 DB::DatabaseOnDisk::shutdown() build_docker/./src/Databases/DatabaseOnDisk.cpp:169:32 (clickhouse+0x171f83cf) (BuildId: eb6f1638a1dca515120703137057aa43dcfd8685)
    #9 DB::DatabaseCatalog::shutdownImpl() build_docker/./src/Interpreters/DatabaseCatalog.cpp:265:26 (clickhouse+0x175a0e20) (BuildId: eb6f1638a1dca515120703137057aa43dcfd8685)
    #10 DB::DatabaseCatalog::shutdown() build_docker/./src/Interpreters/DatabaseCatalog.cpp:863:27 (clickhouse+0x175a9dab) (BuildId: eb6f1638a1dca515120703137057aa43dcfd8685)
    #11 DB::ContextSharedPart::shutdown() build_docker/./src/Interpreters/Context.cpp:567:9 (clickhouse+0x174f7495) (BuildId: eb6f1638a1dca515120703137057aa43dcfd8685)
    #12 DB::Context::shutdown() build_docker/./src/Interpreters/Context.cpp:4172:13 (clickhouse+0x174e6721) (BuildId: eb6f1638a1dca515120703137057aa43dcfd8685)
...

Previous write of size 8 at 0x7b7800c6ea28 by thread T795 (mutexes: write M0):
    #0 std::__1::enable_if<is_move_constructible<DB::BackgroundSchedulePoolTaskInfo*>::value && is_move_assignable<DB::BackgroundSchedulePoolTaskInfo*>::value, void>::type std::__1::swap[abi:v15000]<DB::BackgroundSchedulePoolTaskInfo*>(DB::BackgroundSchedulePoolTaskInfo*&, DB::BackgroundSchedulePoolTaskInfo*&) build_docker/./contrib/llvm-project/libcxx/include/__utility/swap.h:37:7 (clickhouse+0x194f71a8) (BuildId: eb6f1638a1dca515120703137057aa43dcfd8685)
    #1 std::__1::shared_ptr<DB::BackgroundSchedulePoolTaskInfo>::swap[abi:v15000](std::__1::shared_ptr<DB::BackgroundSchedulePoolTaskInfo>&) build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:761:9 (clickhouse+0x194f71a8)
    #2 std::__1::shared_ptr<DB::BackgroundSchedulePoolTaskInfo>::operator=[abi:v15000](std::__1::shared_ptr<DB::BackgroundSchedulePoolTaskInfo>&&) build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:723:38 (clickhouse+0x194f71a8)
    #3 DB::BackgroundSchedulePoolTaskHolder::operator=(DB::BackgroundSchedulePoolTaskHolder&&) build_docker/./src/Core/BackgroundSchedulePool.h:165:110 (clickhouse+0x194f71a8)
    #4 DB::BackgroundJobsAssignee::start() build_docker/./src/Storages/MergeTree/BackgroundJobsAssignee.cpp:103:16 (clickhouse+0x194f71a8)
    #5 DB::StorageMergeTree::startup() build_docker/./src/Storages/StorageMergeTree.cpp:151:40 (clickhouse+0x19a6e293) (BuildId: eb6f1638a1dca515120703137057aa43dcfd8685)
    #6 DB::DatabaseOrdinary::startupTableAsync(DB::AsyncLoader&, std::__1::unordered_set<std::__1::shared_ptr<DB::LoadJob>, std::__1::hash<std::__1::shared_ptr<DB::LoadJob>>, std::__1::equal_to<std::__1::shared_ptr<DB::LoadJob>>, std::__1::allocator<std::__1::shared_ptr<DB::LoadJob>>>, DB::QualifiedTableName const&, DB::LoadingStrictnessLevel)::$_0::operator()(DB::AsyncLoader&, std::__1::shared_ptr<DB::LoadJob> const&) const build_docker/./src/Databases/DatabaseOrdinary.cpp:212:24 (clickhouse+0x17222fd2) (BuildId: eb6f1638a1dca515120703137057aa43dcfd8685)
    #7 decltype(std::declval<DB::DatabaseOrdinary::startupTableAsync
...

https://s3.amazonaws.com/clickhouse-test-reports/56996/51eec7a185e0ebd15363b4ceb3a981903815ebf1/upgrade_check__tsan_/stderr.log

@vdimir vdimir force-pushed the vdimir/hash_join_max_block_size branch from 51eec7a to 9b13705 Compare December 22, 2023 15:52
@vdimir
Member Author

vdimir commented Dec 22, 2023

Rebased one more time
@devcrafter

@alexey-milovidov alexey-milovidov self-assigned this Dec 27, 2023
@alexey-milovidov alexey-milovidov merged commit 0e678fb into master Dec 27, 2023
@alexey-milovidov alexey-milovidov deleted the vdimir/hash_join_max_block_size branch December 27, 2023 14:46
Member

@devcrafter devcrafter left a comment

Sorry, I was late

if (!data)
{
LOG_TRACE(log, "({}) Join data has been already released", fmt::ptr(this));
LOG_TRACE(log, "{}Join data has been already released", instance_log_id);
Member

It'd be more convenient to make instance_log_id part of logger name

Member Author

The logger name normally corresponds to the component name, not to a particular component instance, imo. So I'm not sure we should make the logger name dynamic.

auto inner_hash_join = std::make_shared<InternalHashJoin>();
inner_hash_join->data = std::make_unique<HashJoin>(table_join_, right_sample_block, any_take_last_row_);

inner_hash_join->data = std::make_unique<HashJoin>(table_join_, right_sample_block, any_take_last_row_, 0, fmt::format("concurrent{}", i));
Member

-> "concurrent_{}"

left_sample_block = sample_block.cloneEmpty();
output_sample_block = left_sample_block.cloneEmpty();
ExtraBlockPtr not_processed;
ExtraBlockPtr not_processed = nullptr;
Member

It's not necessary to initialize with nullptr

size_t i = 0;
for (; i < rows; ++i)
{
if constexpr (join_features.need_replication)
Member

@vdimir you've explained what need_replication means at the time, but I've already forgotten and there is no comment. Let's add it next to the need_replication definition.

Member Author

Added comments and changed resize_assume_reserved -> resize #58289

{
if (unlikely(current_offset > max_joined_block_rows))
{
added_columns.offsets_to_replicate->resize_assume_reserved(i);
Member

How do we ensure that offsets_to_replicate is not nullptr and the size is reserved?

Member Author

Yes, we initialize it if join_features.need_replication = true

if constexpr (join_features.need_replication)
added_columns.offsets_to_replicate = std::make_unique<IColumn::Offsets>(rows);

And use it only when the flag is enabled

if (unlikely(current_offset > max_joined_block_rows))
{
added_columns.offsets_to_replicate->resize_assume_reserved(i);
added_columns.filter.resize_assume_reserved(i);
Member

We need to ensure that it's reserved

vdimir added a commit that referenced this pull request Dec 28, 2023
vdimir added a commit that referenced this pull request Jul 15, 2024
vdimir added a commit that referenced this pull request Jul 15, 2024
vdimir added a commit that referenced this pull request Aug 12, 2024
github-merge-queue bot pushed a commit that referenced this pull request Aug 15, 2024