{"id":2650,"date":"2023-05-29T16:15:17","date_gmt":"2023-05-29T08:15:17","guid":{"rendered":"https:\/\/199604.com\/?p=2650"},"modified":"2023-05-29T16:15:17","modified_gmt":"2023-05-29T08:15:17","slug":"clickhouse%e6%95%b0%e6%8d%ae%e5%ba%93%e5%a4%87%e4%bb%bd%e5%92%8c%e6%81%a2%e5%a4%8d","status":"publish","type":"post","link":"https:\/\/199604.com\/2650","title":{"rendered":"clickhouse\u6570\u636e\u5e93\u5907\u4efd\u548c\u6062\u590d"},"content":{"rendered":"<h1>clickhouse\u5907\u4efd\u548c\u6062\u590d<\/h1>\n<h2>\u4e3a\u4ec0\u4e48\u8981\u5907\u4efdCK<\/h2>\n<p>\u867d\u7136CK\u6709\u590d\u5236\u8868\u4e4b\u7c7b\u7684\u5f15\u64ce\uff0c\u4e5f\u6709\u4e00\u4e9b\u4fdd\u62a4\u63aa\u65bd(\u4e0d\u80fd\u4eba\u5de5\u5220\u9664\u4f7f\u7528\u5e26\u6709MergeTree\u5f15\u64ce\u4e14\u5305\u542b\u8d85\u8fc750Gb\u6570\u636e\u7684\u8868)\uff0c\u4f46\u662f\u65e0\u6cd5\u907f\u514d\u7684\u4eba\u4e3a\u5931\u8bef\uff0c\u4ee5\u53ca\u5012\u9709\u900f\u9876\u7684\u673a\u5668\u96c6\u7fa4\u6545\u969c\uff0c\u6216\u8005\u5408\u89c4\u8981\u6c42\u7b49\uff0c\u4e3a\u4e86\u518d\u52a0\u4e00\u4efd\u4fdd\u9669\uff0c\u5c31\u9700\u8981\u7528\u5230\u53ea\u8981\u662f\u6570\u636e\u5e93\uff0c\u5c31\u6709\u5907\u4efd\u7684\u5fc5\u8981\u6027<\/p>\n<p>\u5907\u4efd\u65b9\u6cd5\u548c\u5de5\u5177\u6765\u6e90\u4e8e\u5b98\u7f51:<code>https:\/\/clickhouse.com\/docs\/zh\/operations\/backup<\/code><\/p>\n<blockquote><p>\n  \u4f7f\u7528\u5efa\u8bae\u5148\u5728\u6d4b\u8bd5\u73af\u5883\u6216\u5f00\u53d1\u73af\u5883\uff0c\u628a\u76f8\u5173\u811a\u672cOK\u4e86\u518d\u4e0a\u751f\u4ea7\u5662~\n<\/p><\/blockquote>\n<h2>\u624b\u5de5\u5907\u4efd<\/h2>\n<p>\u624b\u5de5\u5907\u4efd\uff0c\u662f\u4f7f\u7528CK\u7684<code>ALTER TABLE ... 
FREEZE PARTITION ...<\/code>\u547d\u4ee4\u6765\u5b9e\u73b0\uff0c\u662f\u5229\u7528\u786c\u94fe\u63a5\u5230\u4e00\u4e2a\u76ee\u5f55(<code>\/var\/lib\/clickhouse\/shadow<\/code>)\uff0c\u6062\u590d\u65f6\u4ece <code>\/var\/lib\/clickhouse\/bakcup<\/code> \u5bfb\u627e\u6307\u5b9a\u7684\u540d\u79f0\u8fdb\u884c\u6062\u590d<\/p>\n<p>\u9700\u8981\u6ce8\u610f\u7684\u662f <code>\u624b\u5de5\u5907\u4efd\u548c\u6062\u590d<\/code> \u90fd\u4e0d\u4f1a\u6d89\u53ca\u5230\u8868\u7ed3\u6784\uff0c\u53ea\u5355\u7eaf\u7684\u5907\u4efd\u548c\u6062\u590d\u6570\u636e\uff0c\u56e0\u6b64\u76f8\u5173\u8868\u7ed3\u6784\u9700\u8981 \u53e6\u5916\u5b58\u6863\u548c\u81ea\u884c\u521b\u5efa<\/p>\n<blockquote><p>\n  \u57fa\u4e8e\u4e0a\u8ff0\u8981\u6c42\uff0c\u5148\u81ea\u884c\u521b\u5efa<code>\/var\/lib\/clickhouse\/{shadow,bakcup}<\/code>\uff0c\u6ce8\u610f\u8fd92\u4e2a\u76ee\u5f55\u6743\u9650\u5c5e\u4e3b\u5c5e\u7ec4\u662fclickhouse<\/p>\n<p>  <strong>\u5982\u679c\u5728\u914d\u7f6e\u6587\u4ef6\u6307\u5b9a\u4e86data\u6570\u636e\u76ee\u5f55\uff0c\u5219\u662f\u5728\u6307\u5b9a\u76ee\u5f55\u4e0b\u521b\u5efa<\/strong>\n<\/p><\/blockquote>\n<h3><strong>\u5907\u4efd\u4f8b\u5b50<\/strong><\/h3>\n<pre><code class=\"language-shell \">alter table tableName freeze \n#\u4e0d\u8fdb\u5165\u4ea4\u4e92\u6a21\u5f0f\uff0c\u76f4\u63a5\u8fd0\u884c\necho 'alter table tableName freeze ' | clickhouse-client -u xxxx --password  xxxx\n<\/code><\/pre>\n<h3><strong>\u6062\u590d\u4f8b\u5b50<\/strong><\/h3>\n<pre><code class=\"language-shell \">alter table tableName attach partition Namexxx \n#\u4e0d\u8fdb\u5165\u4ea4\u4e92\u6a21\u5f0f\uff0c\u76f4\u63a5\u8fd0\u884c\necho 'alter table tableName attach partition Namexxx ' | clickhouse-client -u xxxx --password  xxxx\n\n#\u67e5\u770b\u6062\u590d\u8868\u6570\u636e\necho 'select count() from tableName' | clickhouse-client -u xxxx --password  xxxx\n<\/code><\/pre>\n<h2>\u5de5\u5177\u5907\u4efd<\/h2>\n<p>\u5b98\u65b9\u63a8\u8350\u7684\u5de5\u5177 
<code>clickhouse-backup<\/code>, available at <code>https:\/\/github.com\/AlexAkulov\/clickhouse-backup<\/code><\/p>\n<p>This tool can back up not only data but also table schemas, and it additionally supports uploading backups to third-party storage, such as S3 (OSS)<\/p>\n<p>Just download a package from the releases page; the rpm package can be installed directly with <code>rpm -ivh<\/code> and then used globally. Here I use the tar package.<\/p>\n<p><img src=\"https:\/\/qn.199604.com\/typoraImg\/image-20230529151710423.png\" alt=\"image-20230529151710423\" \/><\/p>\n<h3>Installation<\/h3>\n<pre><code class=\"language-shell \">tar -zxvf clickhouse-backup.tar.gz # extract\ncd clickhouse-backup\ncp clickhouse-backup \/usr\/local\/bin\/   # copy the executable to \/usr\/local\/bin\nmkdir \/etc\/clickhouse-backup # create a directory under \/etc to hold the config file config.yml\ncp config.yml \/etc\/clickhouse-backup\/\n<\/code><\/pre>\n<h4>The configuration file <code>config.yml<\/code> is as follows<\/h4>\n<pre><code class=\"language-shell \">general:\n  remote_storage: none           # REMOTE_STORAGE, if `none` then `upload` and 
`download` command will fail\n  max_file_size: 1099511627776      # MAX_FILE_SIZE, 1G by default, useless when upload_by_part is true, use for split data parts files by archives\n  disable_progress_bar: true     # DISABLE_PROGRESS_BAR, show progress bar during upload and download, makes sense only when `upload_concurrency` and `download_concurrency` is 1\n  backups_to_keep_local: 1       # BACKUPS_TO_KEEP_LOCAL, how many latest local backup should be kept, 0 means all created backups will be stored on local disk\n                                 # You shall run `clickhouse-backup delete local &lt;backup_name&gt;` command to remove temporary backup files from the local disk\n  backups_to_keep_remote: 2      # BACKUPS_TO_KEEP_REMOTE, how many latest backup should be kept on remote storage, 0 means all uploaded backups will be stored on remote storage.\n                                 # If old backups are required for newer incremental backup then it won't be deleted. Be careful with long incremental backup sequences.\n  log_level: info                # LOG_LEVEL, a choice from `debug`, `info`, `warn`, `error`\n  allow_empty_backups: false     # ALLOW_EMPTY_BACKUPS\n  # concurrency means parallel tables and parallel parts inside tables\n  # for example 4 means max 4 parallel tables and 4 parallel parts inside one table, so equals 16 concurrent streams\n  download_concurrency: 1        # DOWNLOAD_CONCURRENCY, max 255, by default, the value is round(sqrt(AVAILABLE_CPU_CORES \/ 2))\n  upload_concurrency: 1          # UPLOAD_CONCURRENCY, max 255, by default, the value is round(sqrt(AVAILABLE_CPU_CORES \/ 2))\n\n  # RESTORE_SCHEMA_ON_CLUSTER, execute all schema related SQL queries with `ON CLUSTER` clause as Distributed DDL.\n  # Check `system.clusters` table for the correct cluster name, also `system.macros` can be used.\n  # This isn't applicable when `use_embedded_backup_restore: true`\n  restore_schema_on_cluster: \"\"\n  upload_by_part: true           # 
UPLOAD_BY_PART\n  download_by_part: true         # DOWNLOAD_BY_PART\n  use_resumable_state: true      # USE_RESUMABLE_STATE, allow resume upload and download according to the &lt;backup_name&gt;.resumable file\n\n  # RESTORE_DATABASE_MAPPING, restore rules from backup databases to target databases, which is useful when changing destination database, all atomic tables will be created with new UUIDs.\n  # The format for this env variable is \"src_db1:target_db1,src_db2:target_db2\". For YAML please continue using map syntax\n  restore_database_mapping: {}\n  retries_on_failure: 3          # RETRIES_ON_FAILURE, how many times to retry after a failure during upload or download\n  retries_pause: 30s             # RETRIES_PAUSE, duration time to pause after each download or upload failure\nclickhouse:\n  username: default                # CLICKHOUSE_USERNAME\n  password: \"\"                     # CLICKHOUSE_PASSWORD\n  host: localhost                  # CLICKHOUSE_HOST, to backup data clickhouse-backup requires access to the same file system as clickhouse-server, to host could be address of other docker container on the same machine, or IP address binded on some network interface on the same host.\n  port: 9000                       # CLICKHOUSE_PORT, don't use 8123, clickhouse-backup doesn't support HTTP protocol\n  # CLICKHOUSE_DISK_MAPPING, use this mapping when your `system.disks` are different between the source and destination clusters during backup and restore process\n  # The format for this env variable is \"disk_name1:disk_path1,disk_name2:disk_path2\". For YAML please continue using map syntax\n  disk_mapping: {}\n  # CLICKHOUSE_SKIP_TABLES, the list of tables (pattern are allowed) which are ignored during backup and restore process\n  # The format for this env variable is \"pattern1,pattern2,pattern3\". 
For YAML please continue using map syntax\n  skip_tables:\n    - system.*\n    - INFORMATION_SCHEMA.*\n    - information_schema.*\n  timeout: 5m                  # CLICKHOUSE_TIMEOUT\n  freeze_by_part: false        # CLICKHOUSE_FREEZE_BY_PART, allow freezing by part instead of freezing the whole table\n  freeze_by_part_where: \"\"     # CLICKHOUSE_FREEZE_BY_PART_WHERE, allow parts filtering during freezing when freeze_by_part: true\n  secure: false                # CLICKHOUSE_SECURE, use TLS encryption for connection\n  skip_verify: false           # CLICKHOUSE_SKIP_VERIFY, skip certificate verification and allow potential certificate warnings\n  sync_replicated_tables: true # CLICKHOUSE_SYNC_REPLICATED_TABLES\n  tls_key: \"\"                  # CLICKHOUSE_TLS_KEY, filename with TLS key file\n  tls_cert: \"\"                 # CLICKHOUSE_TLS_CERT, filename with TLS certificate file\n  tls_ca: \"\"                   # CLICKHOUSE_TLS_CA, filename with TLS custom authority file\n  log_sql_queries: true        # CLICKHOUSE_LOG_SQL_QUERIES, enable logging `clickhouse-backup` SQL queries on `system.query_log` table inside clickhouse-server\n  debug: false                 # CLICKHOUSE_DEBUG\n  config_dir:      \"\/etc\/clickhouse-server\"              # CLICKHOUSE_CONFIG_DIR\n  restart_command: \"systemctl restart clickhouse-server\" # CLICKHOUSE_RESTART_COMMAND, use this command when restoring with --rbac or --config options\n  ignore_not_exists_error_during_freeze: true # CLICKHOUSE_IGNORE_NOT_EXISTS_ERROR_DURING_FREEZE, helps to avoid backup failures when running frequent CREATE \/ DROP tables and databases during backup, `clickhouse-backup` will ignore `code: 60` and `code: 81` errors during execution of `ALTER TABLE ... 
FREEZE`\n  check_replicas_before_attach: true # CLICKHOUSE_CHECK_REPLICAS_BEFORE_ATTACH, helps avoiding concurrent ATTACH PART execution when restoring ReplicatedMergeTree tables\n  use_embedded_backup_restore: false # CLICKHOUSE_USE_EMBEDDED_BACKUP_RESTORE, use BACKUP \/ RESTORE SQL statements instead of regular SQL queries to use features of modern ClickHouse server versions\nazblob:\n  endpoint_suffix: \"core.windows.net\" # AZBLOB_ENDPOINT_SUFFIX\n  account_name: \"\"             # AZBLOB_ACCOUNT_NAME\n  account_key: \"\"              # AZBLOB_ACCOUNT_KEY\n  sas: \"\"                      # AZBLOB_SAS\n  use_managed_identity: false  # AZBLOB_USE_MANAGED_IDENTITY\n  container: \"\"                # AZBLOB_CONTAINER\n  path: \"\"                     # AZBLOB_PATH, `system.macros` values could be applied as {macro_name}\n  compression_level: 1         # AZBLOB_COMPRESSION_LEVEL\n  compression_format: tar      # AZBLOB_COMPRESSION_FORMAT, allowed values tar, lz4, bzip2, gzip, sz, xz, brortli, zstd, `none` for upload data part folders as is\n  sse_key: \"\"                  # AZBLOB_SSE_KEY\n  buffer_size: 0               # AZBLOB_BUFFER_SIZE, if less or eq 0 then it is calculated as max_file_size \/ max_parts_count, between 2Mb and 4Mb\n  max_parts_count: 10000       # AZBLOB_MAX_PARTS_COUNT, number of parts for AZBLOB uploads, for properly calculate buffer size\n  max_buffers: 3               # AZBLOB_MAX_BUFFERS\ns3:\n  access_key: \"\"                   # S3_ACCESS_KEY\n  secret_key: \"\"                   # S3_SECRET_KEY\n  bucket: \"\"                       # S3_BUCKET\n  endpoint: \"\"                     # S3_ENDPOINT\n  region: us-east-1                # S3_REGION\n  acl: private                     # S3_ACL\n  assume_role_arn: \"\"              # S3_ASSUME_ROLE_ARN\n  force_path_style: false          # S3_FORCE_PATH_STYLE\n  path: \"\"                         # S3_PATH, `system.macros` values could be applied as {macro_name}\n  disable_ssl: false           
    # S3_DISABLE_SSL\n  compression_level: 1             # S3_COMPRESSION_LEVEL\n  compression_format: tar          # S3_COMPRESSION_FORMAT, allowed values tar, lz4, bzip2, gzip, sz, xz, brortli, zstd, `none` for upload data part folders as is\n  # look details in https:\/\/docs.aws.amazon.com\/AmazonS3\/latest\/userguide\/UsingKMSEncryption.html\n  sse: \"\"                          # S3_SSE, empty (default), AES256, or aws:kms\n  sse_kms_key_id: \"\"               # S3_SSE_KMS_KEY_ID, if S3_SSE is aws:kms then specifies the ID of the Amazon Web Services Key Management Service\n  sse_customer_algorithm: \"\"       # S3_SSE_CUSTOMER_ALGORITHM, encryption algorithm, for example, AES256\n  sse_customer_key: \"\"             # S3_SSE_CUSTOMER_KEY, customer-provided encryption key\n  sse_customer_key_md5: \"\"         # S3_SSE_CUSTOMER_KEY_MD5, 128-bit MD5 digest of the encryption key according to RFC 1321\n  sse_kms_encryption_context: \"\"   # S3_SSE_KMS_ENCRYPTION_CONTEXT, base64-encoded UTF-8 string holding a JSON with the encryption context\n                                   # Specifies the Amazon Web Services KMS Encryption Context to use for object encryption.\n                                   # This is a collection of non-secret key-value pairs that represent additional authenticated data.\n                                   # When you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match)\n                                   # encryption context to decrypt the data. 
An encryption context is supported only on operations with symmetric encryption KMS keys\n  disable_cert_verification: false # S3_DISABLE_CERT_VERIFICATION\n  use_custom_storage_class: false  # S3_USE_CUSTOM_STORAGE_CLASS\n  storage_class: STANDARD          # S3_STORAGE_CLASS, by default allow only from list https:\/\/github.com\/aws\/aws-sdk-go-v2\/blob\/main\/service\/s3\/types\/enums.go#L787-L799\n  concurrency: 1                   # S3_CONCURRENCY\n  part_size: 0                     # S3_PART_SIZE, if less or eq 0 then it is calculated as max_file_size \/ max_parts_count, between 5MB and 5Gb\n  max_parts_count: 10000           # S3_MAX_PARTS_COUNT, number of parts for S3 multipart uploads\n  allow_multipart_download: false  # S3_ALLOW_MULTIPART_DOWNLOAD, allow faster download and upload speeds, but will require additional disk space, download_concurrency * part size in worst case\n\n  # S3_OBJECT_LABELS, allow setup metadata for each object during upload, use {macro_name} from system.macros and {backupName} for current backup name\n  # The format for this env variable is \"key1:value1,key2:value2\". 
For YAML please continue using map syntax\n  object_labels: {}\n  # S3_CUSTOM_STORAGE_CLASS_MAP, allow setup storage class depending on the backup name regexp pattern, format nameRegexp &gt; className\n  custom_storage_class_map: {}\n  debug: false                     # S3_DEBUG\ngcs:\n  credentials_file: \"\"         # GCS_CREDENTIALS_FILE\n  credentials_json: \"\"         # GCS_CREDENTIALS_JSON\n  credentials_json_encoded: \"\" # GCS_CREDENTIALS_JSON_ENCODED\n  bucket: \"\"                   # GCS_BUCKET\n  path: \"\"                     # GCS_PATH, `system.macros` values could be applied as {macro_name}\n  compression_level: 1         # GCS_COMPRESSION_LEVEL\n  compression_format: tar      # GCS_COMPRESSION_FORMAT, allowed values tar, lz4, bzip2, gzip, sz, xz, brortli, zstd, `none` for upload data part folders as is\n  storage_class: STANDARD      # GCS_STORAGE_CLASS\n\n  # GCS_OBJECT_LABELS, allow setup metadata for each object during upload, use {macro_name} from system.macros and {backupName} for current backup name\n  # The format for this env variable is \"key1:value1,key2:value2\". 
For YAML please continue using map syntax\n  object_labels: {}\n  # GCS_CUSTOM_STORAGE_CLASS_MAP, allow setup storage class depends on backup name regexp pattern, format nameRegexp &gt; className\n  custom_storage_class_map: {}\n  debug: false                 # GCS_DEBUG\ncos:\n  url: \"\"                      # COS_URL\n  timeout: 2m                  # COS_TIMEOUT\n  secret_id: \"\"                # COS_SECRET_ID\n  secret_key: \"\"               # COS_SECRET_KEY\n  path: \"\"                     # COS_PATH, `system.macros` values could be applied as {macro_name}\n  compression_format: tar      # COS_COMPRESSION_FORMAT, allowed values tar, lz4, bzip2, gzip, sz, xz, brortli, zstd, `none` for upload data part folders as is\n  compression_level: 1         # COS_COMPRESSION_LEVEL\nftp:\n  address: \"\"                  # FTP_ADDRESS\n  timeout: 2m                  # FTP_TIMEOUT\n  username: \"\"                 # FTP_USERNAME\n  password: \"\"                 # FTP_PASSWORD\n  tls: false                   # FTP_TLS\n  path: \"\"                     # FTP_PATH, `system.macros` values could be applied as {macro_name}\n  compression_format: tar      # FTP_COMPRESSION_FORMAT, allowed values tar, lz4, bzip2, gzip, sz, xz, brortli, zstd, `none` for upload data part folders as is\n  compression_level: 1         # FTP_COMPRESSION_LEVEL\n  debug: false                 # FTP_DEBUG\nsftp:\n  address: \"\"                  # SFTP_ADDRESS\n  username: \"\"                 # SFTP_USERNAME\n  password: \"\"                 # SFTP_PASSWORD\n  key: \"\"                      # SFTP_KEY\n  path: \"\"                     # SFTP_PATH, `system.macros` values could be applied as {macro_name}\n  concurrency: 1               # SFTP_CONCURRENCY\n  compression_format: tar      # SFTP_COMPRESSION_FORMAT, allowed values tar, lz4, bzip2, gzip, sz, xz, brortli, zstd, `none` for upload data part folders as is\n  compression_level: 1         # SFTP_COMPRESSION_LEVEL\n  debug: false                 # 
SFTP_DEBUG\ncustom:\n  upload_command: \"\"           # CUSTOM_UPLOAD_COMMAND\n  download_command: \"\"         # CUSTOM_DOWNLOAD_COMMAND\n  delete_command: \"\"           # CUSTOM_DELETE_COMMAND\n  list_command: \"\"             # CUSTOM_LIST_COMMAND\n  command_timeout: \"4h\"          # CUSTOM_COMMAND_TIMEOUT\napi:\n  listen: \"localhost:7171\"     # API_LISTEN\n  enable_metrics: true         # API_ENABLE_METRICS\n  enable_pprof: false          # API_ENABLE_PPROF\n  username: \"\"                 # API_USERNAME, basic authorization for API endpoint\n  password: \"\"                 # API_PASSWORD\n  secure: false                # API_SECURE, use TLS for listen API socket\n  certificate_file: \"\"         # API_CERTIFICATE_FILE\n  private_key_file: \"\"         # API_PRIVATE_KEY_FILE\n  integration_tables_host: \"\"  # API_INTEGRATION_TABLES_HOST, allow using DNS name to connect in `system.backup_list` and `system.backup_actions`\n  allow_parallel: false        # API_ALLOW_PARALLEL, enable parallel operations, this allows for significant memory allocation and spawns go-routines, don't enable it if you are not sure\n  create_integration_tables: false # API_CREATE_INTEGRATION_TABLES, create `system.backup_list` and `system.backup_actions`\n  complete_resumable_after_restart: true # API_COMPLETE_RESUMABLE_AFTER_RESTART, after API server startup, if `\/var\/lib\/clickhouse\/backup\/*\/(upload|download).state` present, then operation will continue in the 
background\n\n<\/code><\/pre>\n<h2>Creating backups locally and remotely<\/h2>\n<h3>Backing up data locally<\/h3>\n<p>Because I am using the tar package here, the configuration file must be specified when running commands; if you would rather not specify it each time, just place it at <code>\/etc\/clickhouse-backup\/config.yml<\/code>. By default, backups are written under <code>\/var\/lib\/clickhouse\/backup<\/code><\/p>\n<blockquote><p>\n  <strong>If a data directory is specified in the configuration file, backups are created under that directory instead<\/strong>, i.e. <code>path\/clickhouse\/backup<\/code>\n<\/p><\/blockquote>\n<h4><strong>Listing the tables that can be backed up<\/strong><\/h4>\n<p><code>\/usr\/local\/bin\/clickhouse-backup tables<\/code><\/p>\n<p><img src=\"https:\/\/qn.199604.com\/typoraImg\/image-20230529152428024.png\" alt=\"image-20230529152428024\" \/><\/p>\n<h4><strong>Backup commands<\/strong><\/h4>\n<pre><code class=\"language-shell \"># back up all tables\n\/usr\/local\/bin\/clickhouse-backup -c config.yml create 
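# (sketch) for the named backups below, a date-stamped name can be generated first;
# BACKUP_NAME is an arbitrary helper variable, not part of the clickhouse-backup CLI
BACKUP_NAME=bak_$(date +%Y-%m-%d)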
\n# back up all tables, giving this backup an explicit name; by default the current time is used, and each backup is stored as a folder\n\/usr\/local\/bin\/clickhouse-backup -c config.yml create bak_Namexxx\n# back up a specific table\n\/usr\/local\/bin\/clickhouse-backup -c config.yml create -t default.tableName bak_Namexxx\n# back up multiple tables\n\/usr\/local\/bin\/clickhouse-backup -c config.yml create -t default.tableName1,default.tableName2 bak_Namexxx\n\n# list backups\n\/usr\/local\/bin\/clickhouse-backup -c config.yml list\n# delete a backup\n\/usr\/local\/bin\/clickhouse-backup -c config.yml delete local bak_Namexxx\n<\/code><\/pre>\n<h4><strong>Restore commands<\/strong><\/h4>\n<pre><code class=\"language-shell \"># restore a named backup, both data and schema\n\/usr\/local\/bin\/clickhouse-backup -c config.yml restore bak_Namexxx\n# restore data and schema for a specific table\n\/usr\/local\/bin\/clickhouse-backup -c config.yml restore -t default.tableName bak_Namexxx\n# restore schema only\n\/usr\/local\/bin\/clickhouse-backup -c config.yml restore -s \n# restore data only\n\/usr\/local\/bin\/clickhouse-backup -c config.yml restore -d \n<\/code><\/pre>\n<h3>Creating backups remotely<\/h3>\n<p>The <code>clickhouse-backup<\/code> tool supports several kinds of remote storage, such as s3\/ftp\/sftp; to back up remotely, change the <code>remote_storage<\/code> parameter in config.yml. See the official documentation for details<\/p>\n<pre><code class=\"language-shell \"># create a remote backup\n\/usr\/local\/bin\/clickhouse-backup -c config.yml 
create_remote\n# download a remote backup to local disk\n\/usr\/local\/bin\/clickhouse-backup -c config.yml download\n# restore from a remote backup\n\/usr\/local\/bin\/clickhouse-backup -c config.yml restore_remote\n\n#----------------------------------------------------------------------\n# upload the backup to a server with scp; this step is unnecessary if sftp is configured in config.yml\nscp -rp \/var\/lib\/clickhouse\/backup\/backupName name@host:\/data\/clickhouse-backup\/\n\n# or, to upload the backup to the server via the sftp settings in the config file\n\/usr\/local\/bin\/clickhouse-backup upload bak_Namexxx\n<\/code><\/pre>\n<h2>Scheduled backups with crontab<\/h2>\n<p><code>clickhouse_backup.sh<\/code><\/p>\n<pre><code class=\"language-shell \">#!\/bin\/bash\nBACKUP_NAME=ch_backup_$(date +%Y-%m-%dT%H-%M-%S)\n\n\/usr\/local\/bin\/clickhouse-backup create $BACKUP_NAME  # local backup\n\/usr\/local\/bin\/clickhouse-backup upload $BACKUP_NAME  # after the local backup, upload it to the remote server\n<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>ClickHouse backup and restore Why back up ClickHouse Although ClickHouse has replicated-table engines and some built-in safeguards (you cannot manually drop 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[308],"tags":[309],"class_list":["post-2650","post","type-post","status-publish","format-standard","hentry","category-clickhouse","tag-clickhouse"],"_links":{"self":[{"href":"https:\/\/199604.com\/wp-json\/wp\/v2\/posts\/2650","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/199604.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/199604.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/199604.com\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/199604.com\/wp-json\/wp\/v2\/comments?post=2650"}],"version-history":[{"count":1,"href":"https:\/\/199604.com\/wp-json\/wp\/v2\/posts\/2650\/revisions"}],"predecessor-version":[{"id":2651,"href":"https:\/\/199604.com\/wp-json\/wp\/v2\/posts\/2650\/revisions\/2651"}],"wp:attachment":[{"href":"https:\/\/199604.com\/wp-json\/wp\/v2\/media?parent=2650"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/199604.com\/wp-json\/wp\/v2\/categories?post=2650"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/199604.com\/wp-json\/wp\/v2\/tags?post=2650"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}