Crash when selecting from iceberg table with iceberg table function with partition pruning #80379
Closed
Labels: comp-datalake (Data lake table formats (Iceberg/Delta/Hudi) integration), crash (Crash / segfault / abort)
Description
Does it reproduce on the most recent release?
Yes
How to reproduce
I used ClickHouse 25.4.
I used pyiceberg to create a table with the following schema and partition spec:
```python
from pyiceberg.partitioning import PartitionField, PartitionSpec
from pyiceberg.schema import Schema
from pyiceberg.transforms import BucketTransform, IdentityTransform
from pyiceberg.types import DoubleType, LongType, NestedField, StringType

schema = Schema(
    NestedField(field_id=1, name="name", field_type=StringType(), required=False),
    NestedField(field_id=2, name="double", field_type=DoubleType(), required=False),
    NestedField(field_id=3, name="integer", field_type=LongType(), required=False),
)
partition_spec = PartitionSpec(
    PartitionField(
        source_id=1,
        field_id=1001,
        transform=BucketTransform(num_buckets=4),
        name="symbol_partition",
    ),
    PartitionField(
        source_id=2,
        field_id=1002,
        transform=IdentityTransform(),
        name="double_partition",
    ),
    PartitionField(
        source_id=3,
        field_id=1003,
        transform=IdentityTransform(),
        name="integer_partition",
    ),
)
```

Then I inserted rows through the iceberg table function:

```sql
SELECT * FROM iceberg('http://minio:9000/warehouse/data', 'user', 'password') FORMAT Values('name_3',3.5,3),('name_4',4.5,4),('name_9',9.5,9),('name_1',1.5,1),('name_6',6.5,6),('name_2',2.5,2),('name_0',0.5,0),('name_5',5.5,5),('name_7',7.5,7),('name_8',8.5,8)
```
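For context (illustrative, not part of the original report): the two identity transforms in the spec above copy the `double` and `integer` column values straight into each row's partition tuple, so every distinct `(double, integer)` pair becomes its own partition. Only the bucketed field depends on Iceberg's Murmur3-based hash, which this sketch deliberately omits.

```python
# Illustrative sketch: identity-transformed partition values for a few of the
# inserted rows. The bucket value for `symbol_partition` depends on Iceberg's
# Murmur3-based hash and is left out here.
rows = [("name_3", 3.5, 3), ("name_9", 9.5, 9), ("name_0", 0.5, 0)]

partition_tuples = [
    {"double_partition": d, "integer_partition": i} for (_name, d, i) in rows
]
print(partition_tuples)
```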
A subsequent SELECT with partition pruning enabled then crashed the server:

```sql
SELECT *
FROM iceberg('http://minio:9000/warehouse/data', 'user', 'password')
WHERE integer > 9
SETTINGS use_iceberg_partition_pruning = '1'
```

[clickhouse1] 2025.05.16 23:29:17.762359 [ 820 ] <Fatal> BaseDaemon: ########################################
[clickhouse1] 2025.05.16 23:29:17.762410 [ 820 ] <Fatal> BaseDaemon: (version 25.4.4.25 (official build), build id: AA037DFCBC079A925A7439B3015DBB579ECD6067, git hash: c97f6ffeac69d2fd590f0be42e3a1961abc20c04) (from thread 30) (query_id: 86df45aa-47b4-426e-b2d6-1c11db29b7e8) (query: SELECT * FROM iceberg('http://minio:9000/warehouse/data', 'admin', '[HIDDEN]') WHERE integer > 9 SETTINGS use_iceberg_partition_pruning = '1') Received signal Segmentation fault (11)
[clickhouse1] 2025.05.16 23:29:17.762454 [ 820 ] <Fatal> BaseDaemon: Address: 0x7b0000603360. Access: read. Address not mapped to object.
[clickhouse1] 2025.05.16 23:29:17.762476 [ 820 ] <Fatal> BaseDaemon: Stack trace: 0x000000000f68c837 0x00007bd57ef94520 0x000000001418bfd8 0x0000000011cfdb4e 0x0000000011cec1fa 0x0000000011121412 0x0000000011c4ba14 0x0000000011bf8d8e 0x0000000011bf8b71 0x0000000014f5179d 0x0000000014f5001f 0x0000000014e779bd 0x0000000013240d36 0x000000001324065e 0x000000001359b068 0x0000000013594766 0x00000000148f5afe 0x00000000149147f9 0x000000001801bdc7 0x000000001801c219 0x0000000017fe753b 0x0000000017fe5a1d 0x00007bd57efe6ac3 0x00007bd57f078850
[clickhouse1] 2025.05.16 23:29:17.762538 [ 820 ] <Fatal> BaseDaemon: 0. signalHandler(int, siginfo_t*, void*) @ 0x000000000f68c837
[clickhouse1] 2025.05.16 23:29:17.762565 [ 820 ] <Fatal> BaseDaemon: 1. ? @ 0x00007bd57ef94520
[clickhouse1] 2025.05.16 23:29:17.762600 [ 820 ] <Fatal> BaseDaemon: 2. DB::KeyCondition::checkInRange(unsigned long, DB::FieldRef const*, DB::FieldRef const*, std::vector<std::shared_ptr<DB::IDataType const>, std::allocator<std::shared_ptr<DB::IDataType const>>> const&, BoolMask) const @ 0x000000001418bfd8
[clickhouse1] 2025.05.16 23:29:17.762633 [ 820 ] <Fatal> BaseDaemon: 3. Iceberg::ManifestFilesPruner::canBePruned(Iceberg::ManifestFileEntry const&) const @ 0x0000000011cfdb4e
[clickhouse1] 2025.05.16 23:29:17.762663 [ 820 ] <Fatal> BaseDaemon: 4. DB::IcebergMetadata::iterate(DB::ActionsDAG const*, std::function<void (DB::FileProgress)>, unsigned long) const @ 0x0000000011cec1fa
[clickhouse1] 2025.05.16 23:29:17.762702 [ 820 ] <Fatal> BaseDaemon: 5. DB::DataLakeConfiguration<DB::StorageS3Configuration, DB::IcebergMetadata>::iterate(DB::ActionsDAG const*, std::function<void (DB::FileProgress)>, unsigned long) @ 0x0000000011121412
[clickhouse1] 2025.05.16 23:29:17.762736 [ 820 ] <Fatal> BaseDaemon: 6. DB::StorageObjectStorageSource::createFileIterator(std::shared_ptr<DB::StorageObjectStorage::Configuration>, DB::StorageObjectStorage::QuerySettings const&, std::shared_ptr<DB::IObjectStorage>, bool, std::shared_ptr<DB::Context const> const&, DB::ActionsDAG::Node const*, std::optional<DB::ActionsDAG> const&, DB::NamesAndTypesList const&, std::vector<std::shared_ptr<DB::RelativePathWithMetadata>, std::allocator<std::shared_ptr<DB::RelativePathWithMetadata>>>*, std::function<void (DB::FileProgress)>, bool, bool) @ 0x0000000011c4ba14
[clickhouse1] 2025.05.16 23:29:17.762763 [ 820 ] <Fatal> BaseDaemon: 7. DB::(anonymous namespace)::ReadFromObjectStorageStep::createIterator() @ 0x0000000011bf8d8e
[clickhouse1] 2025.05.16 23:29:17.762794 [ 820 ] <Fatal> BaseDaemon: 8. DB::(anonymous namespace)::ReadFromObjectStorageStep::applyFilters(DB::ActionDAGNodes) (.1cdeddcced68b1c658286bd6b920a47e) @ 0x0000000011bf8b71
[clickhouse1] 2025.05.16 23:29:17.762829 [ 820 ] <Fatal> BaseDaemon: 9. DB::QueryPlanOptimizations::optimizePrimaryKeyConditionAndLimit(std::vector<DB::QueryPlanOptimizations::Frame, std::allocator<DB::QueryPlanOptimizations::Frame>> const&) @ 0x0000000014f5179d
[clickhouse1] 2025.05.16 23:29:17.762864 [ 820 ] <Fatal> BaseDaemon: 10. DB::QueryPlanOptimizations::optimizeTreeSecondPass(DB::QueryPlanOptimizationSettings const&, DB::QueryPlan::Node&, std::list<DB::QueryPlan::Node, std::allocator<DB::QueryPlan::Node>>&) @ 0x0000000014f5001f
[clickhouse1] 2025.05.16 23:29:17.762984 [ 820 ] <Fatal> BaseDaemon: 11. DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&, bool) @ 0x0000000014e779bd
[clickhouse1] 2025.05.16 23:29:17.763014 [ 820 ] <Fatal> BaseDaemon: 12. DB::InterpreterSelectQueryAnalyzer::buildQueryPipeline() @ 0x0000000013240d36
[clickhouse1] 2025.05.16 23:29:17.763036 [ 820 ] <Fatal> BaseDaemon: 13. DB::InterpreterSelectQueryAnalyzer::execute() @ 0x000000001324065e
[clickhouse1] 2025.05.16 23:29:17.763061 [ 820 ] <Fatal> BaseDaemon: 14. DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*, std::shared_ptr<DB::IAST>&) @ 0x000000001359b068
[clickhouse1] 2025.05.16 23:29:17.763083 [ 820 ] <Fatal> BaseDaemon: 15. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x0000000013594766
[clickhouse1] 2025.05.16 23:29:17.763113 [ 820 ] <Fatal> BaseDaemon: 16. DB::TCPHandler::runImpl() @ 0x00000000148f5afe
[clickhouse1] 2025.05.16 23:29:17.763135 [ 820 ] <Fatal> BaseDaemon: 17. DB::TCPHandler::run() @ 0x00000000149147f9
[clickhouse1] 2025.05.16 23:29:17.763158 [ 820 ] <Fatal> BaseDaemon: 18. Poco::Net::TCPServerConnection::start() @ 0x000000001801bdc7
[clickhouse1] 2025.05.16 23:29:17.763174 [ 820 ] <Fatal> BaseDaemon: 19. Poco::Net::TCPServerDispatcher::run() @ 0x000000001801c219
[clickhouse1] 2025.05.16 23:29:17.763192 [ 820 ] <Fatal> BaseDaemon: 20. Poco::PooledThread::run() @ 0x0000000017fe753b
[clickhouse1] 2025.05.16 23:29:17.763215 [ 820 ] <Fatal> BaseDaemon: 21. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000017fe5a1d
[clickhouse1] 2025.05.16 23:29:17.763227 [ 820 ] <Fatal> BaseDaemon: 22. ? @ 0x00007bd57efe6ac3
[clickhouse1] 2025.05.16 23:29:17.763244 [ 820 ] <Fatal> BaseDaemon: 23. ? @ 0x00007bd57f078850
[clickhouse1] 2025.05.16 23:29:17.901091 [ 820 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: EF16EA7EACF9D6FAF15665BF18207F6A)
[clickhouse1] 2025.05.16 23:29:17.901196 [ 820 ] <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
[clickhouse1] 2025.05.16 23:29:17.901318 [ 820 ] <Fatal> BaseDaemon: Changed settings: use_uncompressed_cache = false, load_balancing = 'random', max_memory_usage = 10000000000, use_iceberg_partition_pruning = true
Error on processing query: Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000. (ATTEMPT_TO_READ_AFTER_EOF) (version 25.4.4.25 (official build))
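For context on where the crash happens: frame 3 of the trace, `Iceberg::ManifestFilesPruner::canBePruned`, decides from the partition bounds stored in a manifest entry whether a data file can possibly match the predicate. A minimal, hypothetical sketch of such a min/max check for `integer > 9` (this is not ClickHouse's `KeyCondition` implementation):

```python
# Hypothetical sketch of min/max partition pruning for the predicate
# `integer > 9`: a manifest entry whose upper bound cannot exceed the
# threshold can be skipped without opening the data file.
def can_be_pruned(lower: int, upper: int, threshold: int) -> bool:
    """True if no value in [lower, upper] satisfies `value > threshold`."""
    return upper <= threshold

# With the inserted values 0..9, every entry is prunable for `integer > 9`,
# so the query should return zero rows rather than segfault.
entries = {"entry_a": (0, 4), "entry_b": (5, 9)}
pruned = {name: can_be_pruned(lo, hi, 9) for name, (lo, hi) in entries.items()}
print(pruned)
```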