
Add replication lag and recovery time metrics #66703

Merged
mstetsyuk merged 15 commits into master from add-replication-lag-and-recovery-time-metrics
Aug 2, 2024

Conversation

@mstetsyuk
Member

Changelog category (leave one):

  • Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Add replication lag and recovery time metrics

Documentation entry for user-facing changes

  • Documentation is written (mandatory for new features)

Information about CI checks: https://clickhouse.com/docs/en/development/continuous-integration/

CI Settings (Only check the boxes if you know what you are doing):

  • Allow: All Required Checks
  • Allow: Stateless tests
  • Allow: Stateful tests
  • Allow: Integration Tests
  • Allow: Performance tests
  • Allow: All Builds
  • Allow: batch 1, 2 for multi-batch jobs
  • Allow: batch 3, 4, 5, 6 for multi-batch jobs

  • Exclude: Style check
  • Exclude: Fast test
  • Exclude: All with ASAN
  • Exclude: All with TSAN, MSAN, UBSAN, Coverage
  • Exclude: All with aarch64, release, debug

  • Do not test
  • Woolen Wolfdog
  • Upload binaries for special builds
  • Disable merge-commit
  • Disable CI cache

@robot-ch-test-poll1 robot-ch-test-poll1 added the pr-improvement Pull request with some product improvements label Jul 18, 2024
@robot-ch-test-poll2
Contributor

robot-ch-test-poll2 commented Jul 18, 2024

This is an automated comment for commit 01ca36c with a description of existing statuses. It is updated for the latest CI run.


| Check name | Description | Status |
| --- | --- | --- |
| Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from the PR. It checks whether the new server can start up successfully without any errors, crashes, or sanitizer asserts | ❌ failure |

Successful checks

| Check name | Description | Status |
| --- | --- | --- |
| AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help | ✅ success |
| Builds | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success |
| ClickBench | Runs [ClickBench](https://github.com/ClickHouse/ClickBench/) with an instant-attach table | ✅ success |
| Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success |
| Docker keeper image | The check to build and optionally push the mentioned image to Docker Hub | ✅ success |
| Docker server image | The check to build and optionally push the mentioned image to Docker Hub | ✅ success |
| Docs check | Builds and tests the documentation | ✅ success |
| Fast test | Normally this is the first check run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success |
| Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test failed at least once, or ran for too long, this check will be red. We don't allow flaky tests, read the doc | ✅ success |
| Install packages | Checks that the built packages are installable in a clean environment | ✅ success |
| Integration tests | The integration tests report. In parentheses the package type is given, and in square brackets are the optional part/total tests | ✅ success |
| Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | ✅ success |
| Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc. | ✅ success |
| Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc. | ✅ success |
| Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ✅ success |
| Style check | Runs a set of checks to keep the code style clean. If some of the checks fail, see the related log in the report | ✅ success |
| Unit tests | Runs the unit tests for different release types | ✅ success |

@mstetsyuk mstetsyuk force-pushed the add-replication-lag-and-recovery-time-metrics branch from 6df4b4d to 1ad9765 (July 18, 2024 16:36)
@mstetsyuk mstetsyuk force-pushed the add-replication-lag-and-recovery-time-metrics branch from 1ad9765 to 7fc8ee7 (July 18, 2024 16:38)
@tavplubix tavplubix self-assigned this Jul 18, 2024
@mstetsyuk
Member Author

Failing CI:

  • 02911_backup_restore_keeper_map is unrelated to this PR and is broken (according to furrious bober)

@mstetsyuk mstetsyuk marked this pull request as ready for review July 24, 2024 09:57
@mstetsyuk mstetsyuk requested a review from tavplubix July 24, 2024 11:37
Comment on lines +358 to +359
UInt32 log_ptr = parse<UInt32>(zookeeper->get(fs::path(zookeeper_path) / "replicas" / full_name / "log_ptr"));
bool is_active = zookeeper->exists(fs::path(zookeeper_path) / "replicas" / full_name / "active");
Member


It will work, but it will drastically slow down selects from system.clusters (especially for clusters with a lot of replicas). For each replica it makes 2 sequential zk requests, so it's 2 * num_of_replicas RTT for each tryGetReplicasInfo call. Let's put all paths in a vector, make one multi-get request (exists is basically a get request plus a check of the error code (ZOK/ZNONODE)), and process the list of responses/error codes.
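The batching the reviewer suggests can be sketched as follows. This is a hedged illustration, not ClickHouse code: `ZkClient` and `multi_get` are hypothetical stand-ins for a ZooKeeper client that resolves many paths in one round trip, and `get_replicas_info` mirrors the shape of `tryGetReplicasInfo` under that assumption.

```python
class ZkClient:
    """Hypothetical in-memory stand-in for a ZooKeeper client.

    multi_get resolves every path in ONE round trip; a missing node
    yields None (the ZNONODE case the reviewer mentions)."""

    def __init__(self, nodes):
        self.nodes = nodes        # path -> stored value
        self.round_trips = 0      # counts network round trips

    def multi_get(self, paths):
        self.round_trips += 1     # one RTT regardless of how many paths
        return [self.nodes.get(p) for p in paths]


def get_replicas_info(zk, zookeeper_path, replicas):
    # Build all paths up front, then issue a single batched request
    # instead of 2 sequential requests per replica.
    paths = []
    for r in replicas:
        paths.append(f"{zookeeper_path}/replicas/{r}/log_ptr")
        paths.append(f"{zookeeper_path}/replicas/{r}/active")
    responses = zk.multi_get(paths)

    info = {}
    for i, r in enumerate(replicas):
        log_ptr_raw = responses[2 * i]
        active_raw = responses[2 * i + 1]
        # "exists" is just a get plus an error-code check:
        # a None response here plays the role of ZNONODE.
        log_ptr = int(log_ptr_raw) if log_ptr_raw is not None else None
        info[r] = (log_ptr, active_raw is not None)
    return info
```

With this shape the cost per `tryGetReplicasInfo` call drops from `2 * num_of_replicas` round trips to one, whatever the cluster size.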

bool DatabaseReplicatedDDLWorker::initializeMainThread()
{
initialization_duration_timer.restart();
initializing.store(true, std::memory_order_release);
Member


This is not a performance-critical part of code, so it's okay to use the default std::memory_order_seq_cst. Using anything besides std::memory_order_seq_cst and std::memory_order_relaxed usually requires a comment explaining why it will work correctly, and I would prefer not to think about this when it's not required for performance reasons


UInt64 DatabaseReplicatedDDLWorker::getCurrentInitializationDurationMs() const
{
return initializing.load(std::memory_order_acquire) ? initialization_duration_timer.elapsedMilliseconds() : 0;
Member


There's still a (theoretical) race condition:

  • thread X checks initializing here, it's true, but kernel decides to pause this thread for a few seconds for some reason
  • the DDLWorker thread finishes the initialization successfully, but instantly gets an unexpected error and calls initializeMainThread again
  • thread X wakes up and reads start_ns inside initialization_duration_timer.elapsedMilliseconds() at the same time when the DDLWorker thread resets it in initialization_duration_timer.restart()
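One way to avoid the race the reviewer describes is to publish the start timestamp as a single value that readers snapshot once, instead of letting readers touch timer state that another thread can restart under them. The sketch below is a hedged Python illustration of that pattern; `InitializationTracker` and its method names are hypothetical, and the C++ analogue would be an atomic holding `start_ns` that `getCurrentInitializationDurationMs` loads exactly once.

```python
import time


class InitializationTracker:
    """Hedged sketch: readers take ONE snapshot of the published start
    time. A concurrent restart hands them either the old or the new
    start value, never a half-reset timer, so the duration they compute
    is always internally consistent."""

    def __init__(self):
        # None means "not initializing"; otherwise the monotonic start time.
        # A single reference swap stands in for an atomic store.
        self._start_ns = None

    def begin(self):
        self._start_ns = time.monotonic_ns()

    def end(self):
        self._start_ns = None

    def current_duration_ms(self):
        start = self._start_ns        # one snapshot, then no shared reads
        if start is None:
            return 0
        return (time.monotonic_ns() - start) // 1_000_000
```

In the scenario from the bullets above, thread X's snapshot either predates or postdates the DDLWorker's restart, so it can no longer observe `start_ns` mid-reset.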

node = cluster.add_instance(
"node",
main_configs=["configs/config.xml"],
with_zookeeper=True,
Member


with_zookeeper is redundant since there's embedded Keeper enabled in the server config

node.query("CREATE TABLE rdb.t (x UInt32) ENGINE = MergeTree ORDER BY x;")
node.exec_in_container(["bash", "-c", "rm /var/lib/clickhouse/metadata/rdb/t.sql"])
node.restart_clickhouse()
assert node.query("SELECT any(recovery_time) FROM system.clusters;") != "0\n"
Member


"NULL\n" != "0\n" evaluates to true, so the assert will not fail even when all recovery times are NULL
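The pitfall is that the test compares raw result strings, and any non-"0" string, including "NULL", satisfies `!= "0\n"`. A stricter check would require an actual numeric value. The helper below is a hypothetical sketch (its name and the exact NULL spellings it rejects are assumptions, not code from the PR):

```python
def recovery_time_reported(raw):
    """raw is a query result string such as "123\n", "0\n", or "NULL\n".

    Returns True only when a real, non-zero recovery time was reported;
    both "0" and the NULL renderings are rejected."""
    value = raw.strip()
    return value not in ("0", "NULL", "\\N")


# The original weak check passes for NULL; the stricter one does not:
assert ("NULL\n" != "0\n") is True
assert recovery_time_reported("NULL\n") is False
assert recovery_time_reported("123\n") is True
```

In the integration test above, asserting `recovery_time_reported(node.query(...))` instead of a bare string inequality would catch the all-NULL case.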

@mstetsyuk mstetsyuk added this pull request to the merge queue Aug 2, 2024
Merged via the queue into master with commit f1df59d Aug 2, 2024
@mstetsyuk mstetsyuk deleted the add-replication-lag-and-recovery-time-metrics branch August 2, 2024 15:52
@robot-ch-test-poll robot-ch-test-poll added the pr-synced-to-cloud The PR is synced to the cloud repo label Aug 2, 2024
@alexey-milovidov
Member

Reverted. Please resubmit.

@mstetsyuk mstetsyuk restored the add-replication-lag-and-recovery-time-metrics branch August 6, 2024 10:46