
Fix incorrect distinguishment between *Cluster and Distributed for object storage #85734

Merged: thevar1able merged 7 commits into master from auto-cluster-engines-fixes on Aug 18, 2025

Conversation

@thevar1able (Member) commented Aug 16, 2025

Changelog category (leave one):

  • Critical Bug Fix (crash, data loss, RBAC) or LOGICAL_ERROR

Changelog entry (a user-readable short description of the changes that goes into CHANGELOG.md):

Using distributed_depth as an indicator of a *Cluster function was incorrect and could lead to data duplication; use client_info.collaborate_with_initiator instead.

Reverts #85359

@thevar1able thevar1able added the ci-functional-test label (CI with functional test jobs only) Aug 16, 2025
clickhouse-gh bot (Contributor) commented Aug 16, 2025

Workflow [PR], commit [e557e1e]

Summary:

Stateless tests (amd_binary, old analyzer, s3 storage, DatabaseReplicated, parallel): failure
  03144_aggregate_states_with_different_types: FAIL
  00210_insert_select_extremes_http: FAIL
  Exception in test runner: FAIL
  Killed by signal (in clickhouse-server.log or clickhouse-server.err.log): FAIL
  Fatal messages (in clickhouse-server.log or clickhouse-server.err.log): FAIL
Bugfix validation (functional tests): failure

@clickhouse-gh clickhouse-gh bot added labels pr-critical-bugfix, pr-must-backport, pr-must-backport-cloud Aug 16, 2025
@thevar1able thevar1able removed the ci-functional-test label Aug 16, 2025
@thevar1able thevar1able marked this pull request as ready for review August 16, 2025 15:59
@thevar1able

This comment was marked as outdated.

@thevar1able thevar1able changed the title from "Another attempt at sanitizing auto cluster table functions" to "Fix incorrect distinguishment between *Cluster and Distributed for object" Aug 16, 2025
@thevar1able thevar1able changed the title to "Fix incorrect distinguishment between *Cluster and Distributed for object storage" Aug 16, 2025
@thevar1able (Member, Author)

Don't mind Bugfix validation failing; I imported a test from another PR that is already green on master.

@thevar1able thevar1able added this pull request to the merge queue Aug 18, 2025
Merged via the queue into master with commit 7a723db Aug 18, 2025
233 of 240 checks passed
@thevar1able thevar1able deleted the auto-cluster-engines-fixes branch August 18, 2025 20:20
@robot-ch-test-poll2 robot-ch-test-poll2 added the pr-synced-to-cloud label Aug 18, 2025
@robot-clickhouse-ci-1 robot-clickhouse-ci-1 added labels pr-backports-created-cloud, pr-must-backport-synced Aug 18, 2025
@nikitamikhaylov (Member)

@thevar1able @KochetovNicolai The test https://s3.amazonaws.com/clickhouse-test-reports/json.html?PR=85734&sha=e557e1e9244fe1488d64f00f8cbd63d564bdb374&name_0=PR in this PR failed with:

clickhouse-server.err.log:2025.08.18 20:46:02.657175 [ 711559 ] {8d4f2e5a-5031-40da-a4d3-ffc4cc3d2714} <Fatal> : Logical error: 'Next task callback is not set for query '.
clickhouse-server.err.log:2025.08.18 20:46:02.678480 [ 711559 ] {8d4f2e5a-5031-40da-a4d3-ffc4cc3d2714} <Fatal> : Stack trace (when copying this message, always include the lines below):
clickhouse-server.err.log:2025.08.18 20:46:02.681081 [ 1638 ] {} <Fatal> BaseDaemon: ########## Short fault info ############
clickhouse-server.err.log:2025.08.18 20:46:02.681096 [ 1638 ] {} <Fatal> BaseDaemon: (version 25.8.1.1, build id: 225E305DEB4FDD6AD34E048240C045562E19D515, git hash: 6b9fc46fb13dfc0c7cfa5eda649010ff65d617f6, architecture: x86_64) (from thread 711559) Received signal 6
clickhouse-server.err.log:2025.08.18 20:46:02.681110 [ 1638 ] {} <Fatal> BaseDaemon: Signal description: Aborted
clickhouse-server.err.log:2025.08.18 20:46:02.681114 [ 1638 ] {} <Fatal> BaseDaemon: 
clickhouse-server.err.log:2025.08.18 20:46:02.681137 [ 1638 ] {} <Fatal> BaseDaemon: Stack trace: 0x00007f33faec39fd 0x00007f33fae6f476 0x00007f33fae557f3 0x000055af223b7153 0x000055af223b79ac 0x000055af223b7c6c 0x000055af1b9307ce 0x000055af1b930100 0x000055af1b92fd2b 0x000055af28285d65 0x000055af26a94716 0x000055af2c5a9b4f 0x000055af2c5a8e21 0x000055af2c4da0cc 0x000055af2c50b384 0x000055af284e0697 0x000055af2846e6c1 0x000055af28473180 0x000055af2843bdbd 0x000055af28434bd8 0x000055af2843152b 0x000055af2843e3dc 0x000055af287fff4e 0x000055af28804bb8 0x000055af2880452a 0x000055af2833601c 0x000055af2833410c 0x000055af276ba1a1 0x000055af2768a9e3 0x000055af28431415 0x000055af2843e3dc 0x000055af287fff4e 0x000055af287fb72e 0x000055af2bf869f9 0x000055af2bfa36d6 0x000055af31f89fc7 0x000055af31f8a57e 0x000055af31f297bf 0x000055af31f26dcf 0x00007f33faec1ac3 0x00007f33faf53850
clickhouse-server.err.log:2025.08.18 20:46:02.681147 [ 1638 ] {} <Fatal> BaseDaemon: ########################################
clickhouse-server.err.log:2025.08.18 20:46:02.681187 [ 1638 ] {} <Fatal> BaseDaemon: (version 25.8.1.1, build id: 225E305DEB4FDD6AD34E048240C045562E19D515, git hash: 6b9fc46fb13dfc0c7cfa5eda649010ff65d617f6) (from thread 711559) (query_id: 8d4f2e5a-5031-40da-a4d3-ffc4cc3d2714) (query: /* ddl_entry=query-0000000006 */ CREATE OR REPLACE TABLE test_1vpvh5ec.table_s3Cluster UUID 'ec2ee3ea-da4f-4f32-aed2-f33df2fa1b49' (`x` UInt32, `y` UInt32, `z` UInt32) ENGINE = MergeTree ORDER BY x SETTINGS index_granularity = 48566, min_bytes_for_wide_part = 0, ratio_of_defaults_for_sparse_serialization = 1., replace_long_file_name_to_hash = false, max_file_name_length = 128, min_bytes_for_full_part_storage = 536870912, compact_parts_max_bytes_to_buffer = 420934667, compact_parts_max_granules_to_buffer = 1, compact_parts_merge_max_bytes_to_prefetch_part = 26088009, merge_max_block_size = 8392, old_parts_lifetime = 480., prefer_fetch_merged_part_size_threshold = 1, vertical_merge_algorithm_min_rows_to_activate = 896920, vertical_merge_algorithm_min_columns_to_activate = 37, min_merge_bytes_to_use_direct_io = 2651127205, index_granularity_bytes = 17206790, use_const_adaptive_granularity = false, enable_index_granularity_compression = false, concurrent_part_removal_threshold = 80, allow_vertical_merges_from_compact_to_wide_parts = true, enable_block_number_column = true, enable_block_offset_column = false, cache_populated_by_fetch = false, marks_compress_block_size = 82036, primary_key_compress_block_size = 34614, use_primary_key_cache = false, prewarm_primary_key_cache = false, prewarm_mark_cache = false AS SELECT * FROM s3Cluster('test_cluster_one_shard_three_replicas_localhost', 'http://localhost:11111/test/test_1vpvh5ec/03579.tsv', 'TSV') LIMIT 100) Received signal Aborted (6)
clickhouse-server.err.log:2025.08.18 20:46:02.681245 [ 1638 ] {} <Fatal> BaseDaemon: 
clickhouse-server.err.log:2025.08.18 20:46:02.681263 [ 1638 ] {} <Fatal> BaseDaemon: Stack trace: 0x00007f33faec39fd 0x00007f33fae6f476 0x00007f33fae557f3 0x000055af223b7153 0x000055af223b79ac 0x000055af223b7c6c 0x000055af1b9307ce 0x000055af1b930100 0x000055af1b92fd2b 0x000055af28285d65 0x000055af26a94716 0x000055af2c5a9b4f 0x000055af2c5a8e21 0x000055af2c4da0cc 0x000055af2c50b384 0x000055af284e0697 0x000055af2846e6c1 0x000055af28473180 0x000055af2843bdbd 0x000055af28434bd8 0x000055af2843152b 0x000055af2843e3dc 0x000055af287fff4e 0x000055af28804bb8 0x000055af2880452a 0x000055af2833601c 0x000055af2833410c 0x000055af276ba1a1 0x000055af2768a9e3 0x000055af28431415 0x000055af2843e3dc 0x000055af287fff4e 0x000055af287fb72e 0x000055af2bf869f9 0x000055af2bfa36d6 0x000055af31f89fc7 0x000055af31f8a57e 0x000055af31f297bf 0x000055af31f26dcf 0x00007f33faec1ac3 0x00007f33faf53850
clickhouse-server.err.log:2025.08.18 20:46:02.681353 [ 1638 ] {} <Fatal> BaseDaemon: 3. ? @ 0x00000000000969fd
clickhouse-server.err.log:2025.08.18 20:46:02.681387 [ 1638 ] {} <Fatal> BaseDaemon: 4. ? @ 0x0000000000042476
clickhouse-server.err.log:2025.08.18 20:46:02.681415 [ 1638 ] {} <Fatal> BaseDaemon: 5. ? @ 0x00000000000287f3
clickhouse-server.err.log:2025.08.18 20:46:02.689621 [ 1638 ] {} <Fatal> BaseDaemon: 6. ./ci/tmp/build/./src/Common/Exception.cpp:51: DB::abortOnFailedAssertion(String const&, void* const*, unsigned long, unsigned long) @ 0x000000000f99a153
clickhouse-server.err.log:2025.08.18 20:46:02.722317 [ 1638 ] {} <Fatal> BaseDaemon: 7. ./ci/tmp/build/./src/Common/Exception.cpp:84: DB::handle_error_code(String const&, std::basic_string_view<char, std::char_traits<char>>, int, bool, std::vector<void*, std::allocator<void*>> const&) @ 0x000000000f99a9ac
clickhouse-server.err.log:2025.08.18 20:46:02.749724 [ 1638 ] {} <Fatal> BaseDaemon: 8. ./ci/tmp/build/./src/Common/Exception.cpp:135: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000f99ac6c
clickhouse-server.err.log:2025.08.18 20:46:02.809259 [ 1638 ] {} <Fatal> BaseDaemon: 9. DB::Exception::Exception(String&&, int, String, bool) @ 0x0000000008f137ce
clickhouse-server.err.log:2025.08.18 20:46:02.846225 [ 1638 ] {} <Fatal> BaseDaemon: 10. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x0000000008f13100
clickhouse-server.err.log:2025.08.18 20:46:02.878906 [ 1638 ] {} <Fatal> BaseDaemon: 11. DB::Exception::Exception<String>(int, FormatStringHelperImpl<std::type_identity<String>::type>, String&&) @ 0x0000000008f12d2b
clickhouse-server.err.log:2025.08.18 20:46:02.985795 [ 1638 ] {} <Fatal> BaseDaemon: 12. ./ci/tmp/build/./src/Interpreters/Context.cpp:6307: DB::Context::getClusterFunctionReadTaskCallback() const @ 0x0000000015868d65
clickhouse-server.err.log:2025.08.18 20:46:03.062232 [ 1638 ] {} <Fatal> BaseDaemon: 13. ./ci/tmp/build/./src/Storages/ObjectStorage/StorageObjectStorageSource.cpp:154: DB::StorageObjectStorageSource::createFileIterator(std::shared_ptr<DB::StorageObjectStorageConfiguration>, DB::StorageObjectStorageQuerySettings const&, std::shared_ptr<DB::IObjectStorage>, bool, std::shared_ptr<DB::Context const> const&, DB::ActionsDAG::Node const*, DB::ActionsDAG const*, DB::NamesAndTypesList const&, DB::NamesAndTypesList const&, std::vector<std::shared_ptr<DB::RelativePathWithMetadata>, std::allocator<std::shared_ptr<DB::RelativePathWithMetadata>>>*, std::function<void (DB::FileProgress)>, bool, bool) @ 0x0000000014077716
clickhouse-server.err.log:2025.08.18 20:46:03.113014 [ 1638 ] {} <Fatal> BaseDaemon: 14. ./ci/tmp/build/./src/Processors/QueryPlan/ReadFromObjectStorageStep.cpp:119: DB::ReadFromObjectStorageStep::createIterator() @ 0x0000000019b8cb4f
clickhouse-server.err.log:2025.08.18 20:46:03.136639 [ 1638 ] {} <Fatal> BaseDaemon: 15. ./ci/tmp/build/./src/Processors/QueryPlan/ReadFromObjectStorageStep.cpp:63: DB::ReadFromObjectStorageStep::initializePipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&) @ 0x0000000019b8be21
clickhouse-server.err.log:2025.08.18 20:46:03.146304 [ 1638 ] {} <Fatal> BaseDaemon: 16. ./ci/tmp/build/./src/Processors/QueryPlan/ISourceStep.cpp:20: DB::ISourceStep::updatePipeline(std::vector<std::unique_ptr<DB::QueryPipelineBuilder, std::default_delete<DB::QueryPipelineBuilder>>, std::allocator<std::unique_ptr<DB::QueryPipelineBuilder, std::default_delete<DB::QueryPipelineBuilder>>>>, DB::BuildQueryPipelineSettings const&) @ 0x0000000019abd0cc
clickhouse-server.err.log:2025.08.18 20:46:03.180051 [ 1638 ] {} <Fatal> BaseDaemon: 17. ./ci/tmp/build/./src/Processors/QueryPlan/QueryPlan.cpp:202: DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&, bool) @ 0x0000000019aee384
clickhouse-server.err.log:2025.08.18 20:46:03.194704 [ 1638 ] {} <Fatal> BaseDaemon: 18. ./ci/tmp/build/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:289: DB::InterpreterSelectQueryAnalyzer::buildQueryPipeline() @ 0x0000000015ac3697
clickhouse-server.err.log:2025.08.18 20:46:03.233215 [ 1638 ] {} <Fatal> BaseDaemon: 19. ./ci/tmp/build/./src/Interpreters/InterpreterInsertQuery.cpp:576: DB::InterpreterInsertQuery::buildInsertSelectPipeline(DB::ASTInsertQuery&, std::shared_ptr<DB::IStorage>) @ 0x0000000015a516c1
clickhouse-server.err.log:2025.08.18 20:46:03.266601 [ 1638 ] {} <Fatal> BaseDaemon: 20. ./ci/tmp/build/./src/Interpreters/InterpreterInsertQuery.cpp:901: DB::InterpreterInsertQuery::execute() @ 0x0000000015a56180
clickhouse-server.err.log:2025.08.18 20:46:03.325240 [ 1638 ] {} <Fatal> BaseDaemon: 21. ./ci/tmp/build/./src/Interpreters/InterpreterCreateQuery.cpp:2238: DB::InterpreterCreateQuery::fillTableIfNeeded(DB::ASTCreateQuery const&) @ 0x0000000015a1edbd
clickhouse-server.err.log:2025.08.18 20:46:03.380715 [ 1638 ] {} <Fatal> BaseDaemon: 22. ./ci/tmp/build/./src/Interpreters/InterpreterCreateQuery.cpp:2146: DB::InterpreterCreateQuery::doCreateOrReplaceTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&, DB::LoadingStrictnessLevel) @ 0x0000000015a17bd8
clickhouse-server.err.log:2025.08.18 20:46:03.455046 [ 1638 ] {} <Fatal> BaseDaemon: 23. ./ci/tmp/build/./src/Interpreters/InterpreterCreateQuery.cpp:1747: DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x0000000015a1452b
clickhouse-server.err.log:2025.08.18 20:46:03.517286 [ 1638 ] {} <Fatal> BaseDaemon: 24. ./ci/tmp/build/./src/Interpreters/InterpreterCreateQuery.cpp:2364: DB::InterpreterCreateQuery::execute() @ 0x0000000015a213dc
clickhouse-server.err.log:2025.08.18 20:46:03.562214 [ 1638 ] {} <Fatal> BaseDaemon: 25. ./ci/tmp/build/./src/Interpreters/executeQuery.cpp:1561: DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, std::unique_ptr<DB::ReadBuffer, std::default_delete<DB::ReadBuffer>>&, std::shared_ptr<DB::IAST>&, std::shared_ptr<DB::ImplicitTransactionControlExecutor>) @ 0x0000000015de2f4e
clickhouse-server.err.log:2025.08.18 20:46:03.601058 [ 1638 ] {} <Fatal> BaseDaemon: 26. ./ci/tmp/build/./src/Interpreters/executeQuery.cpp:1928: DB::executeQuery(std::unique_ptr<DB::ReadBuffer, std::default_delete<DB::ReadBuffer>>, DB::WriteBuffer&, bool, std::shared_ptr<DB::Context>, std::function<void (DB::QueryResultDetails const&)>, DB::QueryFlags, std::optional<DB::FormatSettings> const&, std::function<void (DB::IOutputFormat&, String const&, std::shared_ptr<DB::Context const> const&, std::optional<DB::FormatSettings> const&)>, std::function<void ()>) @ 0x0000000015de7bb8
clickhouse-server.err.log:2025.08.18 20:46:03.659856 [ 1638 ] {} <Fatal> BaseDaemon: 27. ./ci/tmp/build/./src/Interpreters/executeQuery.cpp:1795: DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::shared_ptr<DB::Context>, std::function<void (DB::QueryResultDetails const&)>, DB::QueryFlags, std::optional<DB::FormatSettings> const&, std::function<void (DB::IOutputFormat&, String const&, std::shared_ptr<DB::Context const> const&, std::optional<DB::FormatSettings> const&)>, std::function<void ()>) @ 0x0000000015de752a
clickhouse-server.err.log:2025.08.18 20:46:03.687184 [ 1638 ] {} <Fatal> BaseDaemon: 28. ./ci/tmp/build/./src/Interpreters/DDLWorker.cpp:514: DB::DDLWorker::tryExecuteQuery(DB::DDLTaskBase&, std::shared_ptr<zkutil::ZooKeeper> const&, bool) @ 0x000000001591901c
clickhouse-server.err.log:2025.08.18 20:46:03.712757 [ 1638 ] {} <Fatal> BaseDaemon: 29. ./ci/tmp/build/./src/Interpreters/DDLWorker.cpp:679: DB::DDLWorker::processTask(DB::DDLTaskBase&, std::shared_ptr<zkutil::ZooKeeper> const&, bool) @ 0x000000001591710c
clickhouse-server.err.log:2025.08.18 20:46:03.737480 [ 1638 ] {} <Fatal> BaseDaemon: 30. ./ci/tmp/build/./src/Databases/DatabaseReplicatedWorker.cpp:466: DB::DatabaseReplicatedDDLWorker::tryEnqueueAndExecuteEntry(DB::DDLLogEntry&, std::shared_ptr<DB::Context const>, bool) @ 0x0000000014c9d1a1
clickhouse-server.err.log:2025.08.18 20:46:03.800483 [ 1638 ] {} <Fatal> BaseDaemon: 31. ./ci/tmp/build/./src/Databases/DatabaseReplicated.cpp:1238: DB::DatabaseReplicated::tryEnqueueReplicatedDDL(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context const>, DB::QueryFlags) @ 0x0000000014c6d9e3
clickhouse-server.err.log:2025.08.18 20:46:03.863761 [ 1638 ] {} <Fatal> BaseDaemon: 32. ./ci/tmp/build/./src/Interpreters/InterpreterCreateQuery.cpp:1731: DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x0000000015a14415
clickhouse-server.err.log:2025.08.18 20:46:03.928599 [ 1638 ] {} <Fatal> BaseDaemon: 33. ./ci/tmp/build/./src/Interpreters/InterpreterCreateQuery.cpp:2364: DB::InterpreterCreateQuery::execute() @ 0x0000000015a213dc
clickhouse-server.err.log:2025.08.18 20:46:03.966810 [ 1638 ] {} <Fatal> BaseDaemon: 34. ./ci/tmp/build/./src/Interpreters/executeQuery.cpp:1561: DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, std::unique_ptr<DB::ReadBuffer, std::default_delete<DB::ReadBuffer>>&, std::shared_ptr<DB::IAST>&, std::shared_ptr<DB::ImplicitTransactionControlExecutor>) @ 0x0000000015de2f4e
clickhouse-server.err.log:2025.08.18 20:46:04.007028 [ 1638 ] {} <Fatal> BaseDaemon: 35. ./ci/tmp/build/./src/Interpreters/executeQuery.cpp:1770: DB::executeQuery(String const&, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000015dde72e
clickhouse-server.err.log:2025.08.18 20:46:04.047041 [ 1638 ] {} <Fatal> BaseDaemon: 36. ./ci/tmp/build/./src/Server/TCPHandler.cpp:742: DB::TCPHandler::runImpl() @ 0x00000000195699f9
clickhouse-server.err.log:2025.08.18 20:46:04.113477 [ 1638 ] {} <Fatal> BaseDaemon: 37. ./ci/tmp/build/./src/Server/TCPHandler.cpp:2743: DB::TCPHandler::run() @ 0x00000000195866d6
clickhouse-server.err.log:2025.08.18 20:46:04.115734 [ 1638 ] {} <Fatal> BaseDaemon: 38. ./ci/tmp/build/./base/poco/Net/src/TCPServerConnection.cpp:40: Poco::Net::TCPServerConnection::start() @ 0x000000001f56cfc7
clickhouse-server.err.log:2025.08.18 20:46:04.118546 [ 1638 ] {} <Fatal> BaseDaemon: 39. ./ci/tmp/build/./base/poco/Net/src/TCPServerDispatcher.cpp:115: Poco::Net::TCPServerDispatcher::run() @ 0x000000001f56d57e
clickhouse-server.err.log:2025.08.18 20:46:04.132309 [ 1638 ] {} <Fatal> BaseDaemon: 40. ./ci/tmp/build/./base/poco/Foundation/src/ThreadPool.cpp:205: Poco::PooledThread::run() @ 0x000000001f50c7bf
clickhouse-server.err.log:2025.08.18 20:46:04.135059 [ 1638 ] {} <Fatal> BaseDaemon: 41. ./base/poco/Foundation/src/Thread_POSIX.cpp:341: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001f509dcf
clickhouse-server.err.log:2025.08.18 20:46:04.162954 [ 1638 ] {} <Fatal> BaseDaemon: 42. ? @ 0x0000000000094ac3
clickhouse-server.err.log:2025.08.18 20:46:04.162982 [ 1638 ] {} <Fatal> BaseDaemon: 43. ? @ 0x0000000000126850
clickhouse-server.err.log:2025.08.18 20:46:04.163001 [ 1638 ] {} <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
clickhouse-server.err.log:2025.08.18 20:46:06.214842 [ 1638 ] {} <Fatal> BaseDaemon: Changed settings: min_compress_block_size = 1981503, max_compress_block_size = 1073309, max_block_size = 27437, min_external_table_block_size_bytes = 100000000, max_joined_block_size_rows = 8317, max_insert_threads = 3, max_threads = 1, max_read_buffer_size = 953720, connect_timeout_with_failover_ms = 2000, connect_timeout_with_failover_secure_ms = 3000, idle_connection_timeout = 36000, s3_max_get_rps = 1000000, s3_max_get_burst = 2000000, s3_max_put_rps = 1000000, s3_max_put_burst = 2000000, s3_check_objects_after_upload = true, max_remote_read_network_bandwidth = 1000000000000, max_remote_write_network_bandwidth = 1000000000000, max_local_read_bandwidth = 1000000000000, max_local_write_bandwidth = 1000000000000, stream_like_engine_allow_direct_select = true, replication_wait_for_inactive_replica_timeout = 30, min_count_to_compile_aggregate_expression = 0, group_by_two_level_threshold = 1000000, group_by_two_level_threshold_bytes = 18598102, allow_nonconst_timezone_arguments = true, min_chunk_bytes_for_parallel_parsing = 17520091, merge_tree_coarse_index_granularity = 31, min_bytes_to_use_direct_io = 6927475006, min_bytes_to_use_mmap_io = 10737418240, use_skip_indexes_if_final = false, use_skip_indexes_if_final_exact_mode = false, log_queries = true, insert_quorum_timeout = 60000, merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability = 0.10000000149011612, http_response_buffer_size = 9396844, fsync_metadata = false, query_plan_join_swap_table = false, distributed_ddl_task_timeout = 120, http_send_timeout = 60., http_receive_timeout = 60., use_index_for_in_with_subqueries_max_values = 1000000000, opentelemetry_start_trace_probability = 0.10000000149011612, max_rows_to_read = 20000000, max_bytes_to_read = 1000000000000, max_bytes_to_read_leaf = 1000000000000, max_rows_to_group_by = 10000000000, max_bytes_before_external_group_by = 10737418240, 
max_bytes_ratio_before_external_group_by = 0., max_rows_to_sort = 10000000000, max_bytes_to_sort = 10000000000, prefer_external_sort_block_bytes = 0, max_bytes_before_external_sort = 9192584618, max_bytes_ratio_before_external_sort = 0., max_bytes_before_remerge_sort = 1806424668, max_result_rows = 1000000000, max_result_bytes = 1000000000, max_execution_speed = 100000000000, max_execution_speed_bytes = 10000000000000, timeout_before_checking_execution_speed = 300., max_estimated_execution_time = 600., max_columns_to_read = 20000, max_temporary_columns = 20000, max_temporary_non_const_columns = 20000, max_rows_in_set = 10000000000, max_bytes_in_set = 10000000000, max_rows_in_join = 10000000000, max_bytes_in_join = 10000000000, cross_join_min_rows_to_compress = 100000000, cross_join_min_bytes_to_compress = 1, max_rows_to_transfer = 1000000000, max_bytes_to_transfer = 1000000000, max_rows_in_distinct = 10000000000, max_bytes_in_distinct = 10000000000, max_memory_usage = 5000000000, max_memory_usage_for_user = 32000000000, max_untracked_memory = 1048576, memory_profiler_step = 1048576, max_network_bandwidth = 100000000000, max_network_bytes = 1000000000000, max_network_bandwidth_for_user = 100000000000, max_network_bandwidth_for_all_users = 100000000000, max_temporary_data_on_disk_size_for_user = 100000000000, max_temporary_data_on_disk_size_for_query = 100000000000, max_backup_bandwidth = 100000000000, log_comment = '03579_create_table_populate_from_s3.sh', send_logs_level = 'warning', optimize_aggregation_in_order = true, aggregation_in_order_max_block_bytes = 38986833, read_in_order_two_level_merge_threshold = 91, max_hyperscan_regexp_length = 1000000, max_hyperscan_regexp_total_length = 10000000, allow_introspection_functions = true, database_atomic_wait_for_drop_and_detach_synchronously = true, optimize_append_index = true, lock_acquire_timeout = 60., query_cache_max_size_in_bytes = 10000000, query_cache_max_entries = 100000, 
database_replicated_initial_query_timeout_sec = 120, database_replicated_enforce_synchronous_settings = true, database_replicated_always_detach_permanently = true, database_replicated_allow_replicated_engine_arguments = 3, distributed_ddl_output_mode = 'none', distributed_ddl_entry_format_version = 6, external_storage_max_read_rows = 10000000000, external_storage_max_read_bytes = 10000000000, local_filesystem_read_method = 'read', remote_filesystem_read_method = 'read', local_filesystem_read_prefetch = true, remote_filesystem_read_prefetch = false, merge_tree_min_bytes_per_task_for_remote_reading = 8388608, merge_tree_compact_parts_min_granules_to_multibuffer_read = 37, async_insert_busy_timeout_max_ms = 5000, enable_filesyste

Looks related.

@thevar1able

This comment was marked as outdated.

@thevar1able (Member, Author)

https://github.com/clickhouse/clickhouse/blob/ae2c7c64b261861e5a8f9bceaad50713ff9fc957/src/Interpreters/DDLTask.cpp#L256
https://github.com/clickhouse/clickhouse/blob/ae2c7c64b261861e5a8f9bceaad50713ff9fc957/src/TableFunctions/TableFunctionObjectStorageCluster.cpp#L50

DDL queries are executed as secondary queries, and StorageObjectStorageCluster always sets distributed_processing to true for secondary queries. This isn't even related to the auto cluster functions.

2025.08.19 20:59:20.328114 [ 161003 ] {297f577c-116c-48d0-b4db-4e5eb771f038} <Debug> uwu: Do we actually have preconditions? false, false

LOG_DEBUG(&Poco::Logger::get("uwu"), "Do we actually have preconditions? {}, {}", client_info.collaborate_with_initiator, context->hasClusterFunctionReadTaskCallback());

@thevar1able (Member, Author)

#85904

@thevar1able (Member, Author)

@nikitamikhaylov FYI, I'll finish backports for this PR, then complete backports for #85904, all in one go, so that release branches don't end up with a broken test.

@robot-ch-test-poll4 robot-ch-test-poll4 added the pr-backports-created label Aug 20, 2025
@robot-ch-test-poll1 robot-ch-test-poll1 removed the pr-backports-created label Aug 20, 2025
robot-ch-test-poll added a commit that referenced this pull request Aug 25, 2025
Cherry pick #85734 to 25.3: Fix incorrect distinguishment between `*Cluster` and `Distributed` for object storage
robot-clickhouse added a commit that referenced this pull request Aug 25, 2025
…ter` and `Distributed` for object storage
robot-ch-test-poll added a commit that referenced this pull request Aug 25, 2025
Cherry pick #85734 to 25.5: Fix incorrect distinguishment between `*Cluster` and `Distributed` for object storage
robot-clickhouse added a commit that referenced this pull request Aug 25, 2025
…ter` and `Distributed` for object storage
thevar1able added a commit that referenced this pull request Aug 25, 2025
Cherry pick #85734 to 25.6: Fix incorrect distinguishment between `*Cluster` and `Distributed` for object storage
thevar1able added a commit that referenced this pull request Aug 25, 2025
Cherry pick #85734 to 25.7: Fix incorrect distinguishment between `*Cluster` and `Distributed` for object storage
robot-clickhouse added a commit that referenced this pull request Aug 25, 2025
…ter` and `Distributed` for object storage
robot-clickhouse added a commit that referenced this pull request Aug 25, 2025
…ter` and `Distributed` for object storage
@robot-clickhouse-ci-1 robot-clickhouse-ci-1 added the pr-backports-created label Aug 25, 2025
clickhouse-gh bot added a commit that referenced this pull request Aug 27, 2025
Backport #85734 to 25.6: Fix incorrect distinguishment between `*Cluster` and `Distributed` for object storage
clickhouse-gh bot added a commit that referenced this pull request Aug 27, 2025
Backport #85734 to 25.7: Fix incorrect distinguishment between `*Cluster` and `Distributed` for object storage
thevar1able added a commit that referenced this pull request Aug 27, 2025
Backport #85734 to 25.3: Fix incorrect distinguishment between `*Cluster` and `Distributed` for object storage
thevar1able added a commit that referenced this pull request Aug 27, 2025
Backport #85734 to 25.5: Fix incorrect distinguishment between `*Cluster` and `Distributed` for object storage

Labels

pr-backports-created: Backport PRs are successfully created, it won't be processed by CI script anymore
pr-backports-created-cloud: deprecated label, NOOP
pr-critical-bugfix
pr-must-backport: Pull request should be backported intentionally. Use this label with great care!
pr-must-backport-synced: The `*-must-backport` labels are synced into the cloud Sync PR
pr-synced-to-cloud: The PR is synced to the cloud repo


7 participants