
Stress tests: exit 1 in run_with_retry exits from the whole check #56798

@tavplubix

Description

https://s3.amazonaws.com/clickhouse-test-reports/56693/8f863a338f00d28c1dc242f6231fab7288377e87/stress_test__msan_.html

The server fails to start after stress tests due to:

2023.11.15 03:45:51.651289 [ 36409 ] {} <Error> Application: Caught exception while loading metadata: Code: 36. DB::Exception: Expected cache path (`path`) in configuration: While processing disk(type = cache, disk = 'local_disk', name = '$CLICHOUSE_TEST_UNIQUE_NAME', cache_name = 'cache_collection_sql'): Cannot attach table `test_1`.`test` from metadata file /var/lib/clickhouse/store/422/4221c4bb-bc7b-4acc-ab11-f869a9eb4e27/test.sql from query ATTACH TABLE test_1.test UUID 'dc31a6ec-a686-4ea3-ad6c-dee95428e645' (`a` Int32, `b` String) ENGINE = MergeTree ORDER BY a SETTINGS disk = disk(type = cache, disk = 'local_disk', name = '$CLICHOUSE_TEST_UNIQUE_NAME', cache_name = 'cache_collection_sql'), index_granularity = 52318, min_bytes_for_wide_part = 494146144, ratio_of_defaults_for_sparse_serialization = 1., replace_long_file_name_to_hash = 1, max_file_name_length = 112, merge_max_block_size = 21108, prefer_fetch_merged_part_size_threshold = 10737418240, vertical_merge_algorithm_min_rows_to_activate = 1, vertical_merge_algorithm_min_columns_to_activate = 49, min_merge_bytes_to_use_direct_io = 1, index_granularity_bytes = 3308535, allow_vertical_merges_from_compact_to_wide_parts = 1, marks_compress_block_size = 78680, primary_key_compress_block_size = 62883. (BAD_ARGUMENTS), Stack trace (when copying this message, always include the lines below):
2023.11.15 03:45:54.757607 [ 36409 ] {} <Error> Application: Code: 36. DB::Exception: Expected cache path (`path`) in configuration: While processing disk(type = cache, disk = 'local_disk', name = '$CLICHOUSE_TEST_UNIQUE_NAME', cache_name = 'cache_collection_sql'): Cannot attach table `test_1`.`test` from metadata file /var/lib/clickhouse/store/422/4221c4bb-bc7b-4acc-ab11-f869a9eb4e27/test.sql from query ATTACH TABLE test_1.test UUID 'dc31a6ec-a686-4ea3-ad6c-dee95428e645' (`a` Int32, `b` String) ENGINE = MergeTree ORDER BY a SETTINGS disk = disk(type = cache, disk = 'local_disk', name = '$CLICHOUSE_TEST_UNIQUE_NAME', cache_name = 'cache_collection_sql'), index_granularity = 52318, min_bytes_for_wide_part = 494146144, ratio_of_defaults_for_sparse_serialization = 1., replace_long_file_name_to_hash = 1, max_file_name_length = 112, merge_max_block_size = 21108, prefer_fetch_merged_part_size_threshold = 10737418240, vertical_merge_algorithm_min_rows_to_activate = 1, vertical_merge_algorithm_min_columns_to_activate = 49, min_merge_bytes_to_use_direct_io = 1, index_granularity_bytes = 3308535, allow_vertical_merges_from_compact_to_wide_parts = 1, marks_compress_block_size = 78680, primary_key_compress_block_size = 62883. (BAD_ARGUMENTS), Stack trace (when copying this message, always include the lines below):
2023.11.15 03:45:54.757848 [ 36409 ] {} <Error> Application: DB::Exception: Expected cache path (`path`) in configuration: While processing disk(type = cache, disk = 'local_disk', name = '$CLICHOUSE_TEST_UNIQUE_NAME', cache_name = 'cache_collection_sql'): Cannot attach table `test_1`.`test` from metadata file /var/lib/clickhouse/store/422/4221c4bb-bc7b-4acc-ab11-f869a9eb4e27/test.sql from query ATTACH TABLE test_1.test UUID 'dc31a6ec-a686-4ea3-ad6c-dee95428e645' (`a` Int32, `b` String) ENGINE = MergeTree ORDER BY a SETTINGS disk = disk(type = cache, disk = 'local_disk', name = '$CLICHOUSE_TEST_UNIQUE_NAME', cache_name = 'cache_collection_sql'), index_granularity = 52318, min_bytes_for_wide_part = 494146144, ratio_of_defaults_for_sparse_serialization = 1., replace_long_file_name_to_hash = 1, max_file_name_length = 112, merge_max_block_size = 21108, prefer_fetch_merged_part_size_threshold = 10737418240, vertical_merge_algorithm_min_rows_to_activate = 1, vertical_merge_algorithm_min_columns_to_activate = 49, min_merge_bytes_to_use_direct_io = 1, index_granularity_bytes = 3308535, allow_vertical_merges_from_compact_to_wide_parts = 1, marks_compress_block_size = 78680, primary_key_compress_block_size = 62883

Looks related to #56541

Another problem is that the Stress test task fails with check_status.tsv doesn't exists if the server fails to restart. This happens because of the exit 1 in run_with_retry, which is called from attach_gdb_to_clickhouse:

+ retry=58
+ sleep 5
+ '[' 58 -ge 60 ']'
+ clickhouse-client --query 'SELECT '\''Connected to clickhouse-server after attaching gdb'\'''
Code: 210. DB::NetException: Connection refused (localhost:9000). (NETWORK_ERROR)

+ retry=59
+ sleep 5
+ '[' 59 -ge 60 ']'
+ clickhouse-client --query 'SELECT '\''Connected to clickhouse-server after attaching gdb'\'''
Code: 210. DB::NetException: Connection refused (localhost:9000). (NETWORK_ERROR)

+ retry=60
+ sleep 5
Command 'clickhouse-client --query SELECT 'Connected to clickhouse-server after attaching gdb'' failed after 60 retries, exiting
+ '[' 60 -ge 60 ']'
+ echo 'Command '\''clickhouse-client --query SELECT '\''Connected to clickhouse-server after attaching gdb'\'''\'' failed after 60 retries, exiting'
+ exit 1

We should fix this as well: a failure inside run_with_retry should not exit from the whole of stress/run.sh.
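One possible shape of the fix can be sketched as follows (a minimal illustration, not the actual helper from the ClickHouse CI scripts: the function body, the RETRY_SLEEP variable, and the retry-count argument are assumptions). The key change is that the function reports failure with return 1 instead of exit 1, so the caller in stress/run.sh can tolerate the failure and still produce check_status.tsv:

```shell
#!/usr/bin/env bash

# Hypothetical run_with_retry: retries a command up to $1 times,
# sleeping between attempts. On exhaustion it RETURNS non-zero
# instead of exiting, so sourcing scripts survive the failure.
function run_with_retry()
{
    local total_retries="$1"
    shift

    local retry=0
    until "$@"
    do
        if [ "$retry" -ge "$total_retries" ]
        then
            echo "Command '$*' failed after $total_retries retries" >&2
            return 1    # was: exit 1 -- exit would kill stress/run.sh itself
        fi
        retry=$((retry + 1))
        sleep "${RETRY_SLEEP:-5}"
    done
}

# Usage in the caller (failure tolerated explicitly, so `set -e` does not abort):
#   run_with_retry 60 clickhouse-client --query "SELECT 1" || echo "server still down"
```

With this shape, attach_gdb_to_clickhouse can decide per call site whether a connection failure is fatal, instead of the helper making that decision for the whole check.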

cc: @kssenii

Metadata

Labels: fuzz (Problem found by one of the fuzzers)
