
[Bug]: Unable to PITR with barman plugin #7894

@BaldFabi

Description

Is there an existing issue already for this bug?

  • I have searched for an existing issue, and could not find anything. I believe this is a new bug.

I have read the troubleshooting guide

  • I have read the troubleshooting guide and I think this is a new bug.

I am running a supported version of CloudNativePG

  • I am running a currently supported version of CloudNativePG.

Contact Details

No response

Version

1.26 (latest patch)

What version of Kubernetes are you using?

1.31

What is your Kubernetes environment?

Self-managed: k3s

How did you install the operator?

Helm

What happened?

I've deployed cloudnative-pg in a test environment with a test MinIO instance on another host, so there is no concern about any of the tokens below leaking.

The comments in the manifest show what I used to deploy a cluster with backups, which worked without any problems: a base backup as well as WAL archives were successfully created and stored in the S3 bucket.


To test the backup-and-restore scenario, I tried to restore the cluster with the following manifest: I manually deleted the cluster definition in Kubernetes and applied the manifest again to trigger the restore.
The attached log contains a couple of errors, but the following line is logged only at info level:

{"level":"info","ts":"2025-06-25T21:29:39.621255552Z","logger":"pg_ctl","msg":"2025-06-25 21:29:39.617 UTC [30] FATAL:  configuration file \"/var/lib/postgresql/data/pgdata/custom.conf\" contains errors","pipe":"stdout","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery"}

(I just wanted to point this line out; I don't know if it is the reason the recovery fails. The log line immediately before it reports the underlying problem: invalid value for parameter "recovery_target_time": "2025-06-25 21:20:00.000000Z".)

The restored cluster doesn't start successfully:

NAME                                                         READY   STATUS   RESTARTS   AGE
my-great-cluster-deployment-restored-1-full-recovery-6nhpb   0/2     Error    0          22m
my-great-cluster-deployment-restored-1-full-recovery-8qppb   0/2     Error    0          23m
my-great-cluster-deployment-restored-1-full-recovery-9dtb8   0/2     Error    0          11m
my-great-cluster-deployment-restored-1-full-recovery-gpwng   0/2     Error    0          23m
my-great-cluster-deployment-restored-1-full-recovery-gvx7x   0/2     Error    0          16m
my-great-cluster-deployment-restored-1-full-recovery-mbqvv   0/2     Error    0          21m

Currently it looks like it's impossible to restore this test deployment, which seems like a bug to me.
If I omit the recoveryTarget and targetTime, a couple of errors still appear, but the restore succeeds in the end.
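For what it's worth, the "Generated recovery configuration" log line below shows the plugin translating targetTime into recovery_target_time = '2025-06-25 21:20:00.000000Z', which PostgreSQL then rejects as an invalid value. A possible workaround (untested here, and only an assumption based on the PITR examples in the CloudNativePG documentation, which use an explicit numeric offset rather than the RFC 3339 "Z" suffix) would be:

```yaml
# Hypothetical workaround, not verified: write the target time with an
# explicit numeric UTC offset ("+00") instead of the "Z" suffix, as in
# the PITR examples from the CloudNativePG documentation.
recoveryTarget:
  targetTime: "2025-06-25 21:20:00.000000+00"
```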

Cluster resource

---
apiVersion: v1
kind: Namespace
metadata:
  name: cloudnative-pg
---
apiVersion: v1
kind: Secret
metadata:
  name: cloudnative-pg-initdb-secret
  namespace: cloudnative-pg
data:
  username: dGVzdHVzZXI= # testuser
  password: QWJjMTIzNF8= # Abc1234_
---
apiVersion: v1
kind: Secret
metadata:
  name: cloudnative-pg-superuser-secret
  namespace: cloudnative-pg
data:
  username: cG9zdGdyZXM= # postgres
  password: QWJjMTIzNF8= # Abc1234_
---
apiVersion: v1
kind: Secret
metadata:
  name: cloudnative-pg-backup-secret
  namespace: cloudnative-pg
data:
  accessKey: QUgwYXF2dmZYZ1hxNFpPQ2R3amE=
  secretKey: Tnhac2lmOHpGU0FGdFJDQnN6RlAwejJKTFVqRktRYmpRN0RLbE5ENg==
---
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: my-great-cluster-deployment-object-store
  namespace: cloudnative-pg
spec:
  #retentionPolicy: 30d
  configuration:
    endpointURL: http://10.50.0.2:9000
    destinationPath: s3://test-cloudnative-pg/
    s3Credentials:
      accessKeyId:
        name: cloudnative-pg-backup-secret
        key: accessKey
      secretAccessKey:
        name: cloudnative-pg-backup-secret
        key: secretKey
    wal:
      encryption:
      compression: gzip
    data:
      encryption:
      compression: gzip
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  #name: my-great-cluster-deployment
  name: my-great-cluster-deployment-restored
  namespace: cloudnative-pg
spec:
  instances: 1
  superuserSecret:
    name: cloudnative-pg-superuser-secret
  storage:
    size: 10Gi
  plugins:
    - name: barman-cloud.cloudnative-pg.io
      #isWALArchiver: true
      parameters:
        barmanObjectName: my-great-cluster-deployment-object-store
  bootstrap:
    #initdb:
    #  database: testdb
    #  owner: testuser
    #  secret:
    #    name: cloudnative-pg-initdb-secret
    recovery:
      source: recover
      database: testdb
      owner: testuser
      secret:
        name: cloudnative-pg-initdb-secret
      recoveryTarget:
        targetTime: "2025-06-25T21:20:00Z"
  externalClusters:
    - name: recover
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          barmanObjectName: my-great-cluster-deployment-object-store
          serverName: my-great-cluster-deployment
---
apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: initial-backup
  namespace: cloudnative-pg
spec:
  cluster:
    name: my-great-cluster-deployment-restored
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io

Relevant log output

(copied out of k9s)

plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:25.764163652Z","msg":"Starting barman cloud instance plugin","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery"}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:25.765401323Z","logger":"controller-runtime.metrics","msg":"Starting metrics server"}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:25.765642343Z","logger":"controller-runtime.metrics","msg":"Serving metrics server","bindAddress":":8080","secure":false}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:25.765755401Z","msg":"Starting plugin listener","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","protocol":"unix","socketName":"/plugins/barman-cloud.cloudnative-pg.io"}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:25.765815007Z","msg":"TCP server not active, skipping TLSCerts generation","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery"}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:25.765863758Z","msg":"Starting plugin","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","name":"barman-cloud.cloudnative-pg.io","displayName":"BarmanCloudInstance","version":"0.5.0"}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:34.962669233Z","msg":"barman-cloud-check-wal-archive checking the first wal","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery"}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:36.003081049Z","msg":"Recovering from external cluster","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","serverName":"my-great-cluster-deployment","objectStore":{"s3Credentials":{"accessKeyId":{"name":"cloudnative-pg-backup-secret","key":"accessKey"},"secretAccessKey":{"name":"cloudnative-pg-backup-secret","key":"secretKey"}},"endpointURL":"http://10.50.0.2:9000","destinationPath":"s3://test-cloudnative-pg/","wal":{"compression":"gzip"},"data":{"compression":"gzip"}}}
full-recovery {"level":"info","ts":"2025-06-25T21:29:34.847550005Z","msg":"Starting webserver","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","address":"localhost:8010","hasTLS":false}
full-recovery {"level":"info","ts":"2025-06-25T21:29:34.949114287Z","msg":"pg_controldata check on existing directory succeeded, renaming the folders","pgdata":"/var/lib/postgresql/data/pgdata","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","out":"pg_control version number:            1700\nCatalog version number:               202406281\nDatabase system identifier:           7519999403269419028\nDatabase cluster state:               in production\npg_control last modified:             Wed 25 Jun 2025 09:18:48 PM UTC\nLatest checkpoint location:           0/3000080\nLatest checkpoint's REDO location:    0/3000028\nLatest checkpoint's REDO WAL file:    000000010000000000000003\nLatest checkpoint's TimeLineID:       1\nLatest checkpoint's PrevTimeLineID:   1\nLatest checkpoint's full_page_writes: on\nLatest checkpoint's NextXID:          0:742\nLatest checkpoint's NextOID:          24578\nLatest checkpoint's NextMultiXactId:  1\nLatest checkpoint's NextMultiOffset:  0\nLatest checkpoint's oldestXID:        730\nLatest checkpoint's oldestXID's DB:   1\nLatest checkpoint's oldestActiveXID:  742\nLatest checkpoint's oldestMultiXid:   1\nLatest checkpoint's oldestMulti's DB: 1\nLatest checkpoint's oldestCommitTsXid:0\nLatest checkpoint's newestCommitTsXid:0\nTime of latest checkpoint:            Wed 25 Jun 2025 09:18:46 PM UTC\nFake LSN counter for unlogged rels:   0/3E8\nMinimum recovery ending location:     0/0\nMin recovery ending loc's timeline:   0\nBackup start location:                0/0\nBackup end location:                  0/0\nEnd-of-backup record required:        no\nwal_level setting:                    logical\nwal_log_hints setting:                on\nmax_connections setting:              100\nmax_worker_processes setting:         32\nmax_wal_senders setting:              10\nmax_prepared_xacts setting:           0\nmax_locks_per_xact setting:           64\ntrack_commit_timestamp setting:       off\nMaximum data alignment:            
   8\nDatabase block size:                  8192\nBlocks per segment of large relation: 131072\nWAL block size:                       8192\nBytes per WAL segment:                16777216\nMaximum length of identifiers:        64\nMaximum columns in an index:          32\nMaximum size of a TOAST chunk:        1996\nSize of a large-object chunk:         2048\nDate/time type storage:               64-bit integers\nFloat8 argument passing:              by value\nData page checksum version:           0\nMock authentication nonce:            13ae84d3ba48cd8460422b6a9fc252b17894eef4773f1a2ff3a05fb5e4b54434\n"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:34.949139972Z","msg":"renaming the data directory","pgdata":"/var/lib/postgresql/data/pgdata","pgwal":"","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","pgdataNewName":"/var/lib/postgresql/data/pgdata_20250625T212934Z"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:34.949238057Z","msg":"Restore through plugin detected, proceeding...","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.287441198Z","msg":"Creating new data directory","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","pgdata":"/controller/recovery/datadir_2403194427","initDbOptions":["--username","postgres","-D","/controller/recovery/datadir_2403194427","--no-sync"]}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.570146013Z","logger":"initdb","msg":"The files belonging to this database system will be owned by user \"postgres\".\nThis user must also own the server process.\n\nThe database cluster will be initialized with locale \"en_US.utf8\".\nThe default database encoding has accordingly been set to \"UTF8\".\nThe default text search configuration will be set to \"english\".\n\nData page checksums are disabled.\n\nfixing permissions on existing directory /controller/recovery/datadir_2403194427 ... ok\ncreating subdirectories ... ok\nselecting dynamic shared memory implementation ... posix\nselecting default \"max_connections\" ... 100\nselecting default \"shared_buffers\" ... 128MB\nselecting default time zone ... Etc/UTC\ncreating configuration files ... ok\nrunning bootstrap script ... ok\nperforming post-bootstrap initialization ... ok\n\nSync to disk skipped.\nThe data directory might become corrupt if the operating system crashes.\n\n\nSuccess. You can now start the database server using:\n\n    pg_ctl -D /controller/recovery/datadir_2403194427 -l logfile start\n\n","pipe":"stdout","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.570162518Z","logger":"initdb","msg":"initdb: warning: enabling \"trust\" authentication for local connections\ninitdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.\n","pipe":"stderr","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery"}
bootstrap-controller {"level":"info","ts":"2025-06-25T21:29:24.562730347Z","msg":"Installing the manager executable","destination":"/controller/manager","version":"1.26.0","build":{"Version":"1.26.0","Commit":"1535f3c17","Date":"2025-05-23"}}
bootstrap-controller {"level":"info","ts":"2025-06-25T21:29:24.61032836Z","msg":"Setting 0750 permissions"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.575370272Z","msg":"Installed configuration file","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","pgdata":"/controller/recovery/datadir_2403194427","filename":"pg_hba.conf"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.577681032Z","msg":"Installed configuration file","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","pgdata":"/controller/recovery/datadir_2403194427","filename":"pg_ident.conf"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.579956734Z","msg":"Installed configuration file","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","pgdata":"/controller/recovery/datadir_2403194427","filename":"custom.conf"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.599274699Z","msg":"Generated recovery configuration","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","configuration":"recovery_target_action = promote\nrestore_command = '/controller/manager wal-restore --log-destination /controller/log/postgres.json %f %p'\n\nrecovery_target_time = '2025-06-25 21:20:00.000000Z'\nrecovery_target_inclusive = true\n"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.604371668Z","msg":"Aligned PostgreSQL configuration to satisfy both pg_controldata and cluster spec","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","enforcedParams":{"max_connections":"100","max_locks_per_transaction":"64","max_prepared_transactions":"0","max_wal_senders":"10","max_worker_processes":"32"},"controldataParams":{"max_connections":100,"max_locks_per_transaction":64,"max_prepared_transactions":0,"max_wal_senders":10,"max_worker_processes":32},"clusterParams":{"max_worker_processes":32}}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.606469632Z","msg":"Starting up instance","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","pgdata":"/var/lib/postgresql/data/pgdata","options":["start","-w","-D","/var/lib/postgresql/data/pgdata","-o","-c port=5432 -c unix_socket_directories=/controller/run","-t 40000000","-o","-c listen_addresses='127.0.0.1'"]}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.621236886Z","logger":"pg_ctl","msg":"waiting for server to start....2025-06-25 21:29:39.617 UTC [30] LOG:  invalid value for parameter \"recovery_target_time\": \"2025-06-25 21:20:00.000000Z\"","pipe":"stdout","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery"}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:36.005687598Z","msg":"Downloading backup catalog","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery"}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:37.02634534Z","msg":"Downloaded backup catalog","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","backupCatalog":[{"backupID":"backup-20250625211845","startTime":"2025-06-25T21:18:46.7782Z","endTime":"2025-06-25T21:18:51.137598Z"}]}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:37.026502619Z","msg":"Target backup found","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","backup":{"backup_name":"backup-20250625211845","backup_label":"'START WAL LOCATION: 0/3000028 (file 000000010000000000000003)\\nCHECKPOINT LOCATION: 0/3000080\\nBACKUP METHOD: streamed\\nBACKUP FROM: primary\\nSTART TIME: 2025-06-25 21:18:48 UTC\\nLABEL: Barman backup cloud 20250625T211846\\nSTART TIMELINE: 1\\n'","begin_time":"Wed Jun 25 21:18:46 2025","end_time":"Wed Jun 25 21:18:51 2025","begin_time_iso":"2025-06-25T21:18:46.778200+00:00","end_time_iso":"2025-06-25T21:18:51.137598+00:00","BeginTime":"2025-06-25T21:18:46.7782Z","EndTime":"2025-06-25T21:18:51.137598Z","begin_wal":"000000010000000000000003","end_wal":"000000010000000000000003","begin_xlog":"0/3000028","end_xlog":"0/3000158","systemid":"7519999403269419028","backup_id":"20250625T211846","error":"","timeline":1}}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:38.056009276Z","msg":"Starting barman-cloud-restore","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","options":["--cloud-provider","aws-s3","--endpoint-url","http://10.50.0.2:9000","s3://test-cloudnative-pg/","my-great-cluster-deployment","20250625T211846","/var/lib/postgresql/data/pgdata"]}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:39.287058138Z","msg":"Restore completed","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery"}
bootstrap-controller {"level":"info","ts":"2025-06-25T21:29:24.610373749Z","msg":"Bootstrap completed"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.621255552Z","logger":"pg_ctl","msg":"2025-06-25 21:29:39.617 UTC [30] FATAL:  configuration file \"/var/lib/postgresql/data/pgdata/custom.conf\" contains errors","pipe":"stdout","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.712407784Z","logger":"pg_ctl","msg":"pg_ctl: could not start server","pipe":"stderr","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.712422807Z","logger":"pg_ctl","msg":"Examine the log output.","pipe":"stderr","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.71245538Z","logger":"pg_ctl","msg":" stopped waiting","pipe":"stdout","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.712485829Z","msg":"Exited log pipe","fileName":"/controller/log/postgres.csv","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery"}
full-recovery {"level":"error","ts":"2025-06-25T21:29:39.71249582Z","msg":"Error while restoring a backup","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","error":"while activating instance: error starting PostgreSQL instance: exit status 1","stacktrace":"github.com/cloudnative-pg/machinery/pkg/log.(*logger).Error\n\tpkg/mod/github.com/cloudnative-pg/machinery@v0.2.0/pkg/log/log.go:125\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/instance/restore.restoreSubCommand\n\tinternal/cmd/manager/instance/restore/restore.go:79\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/instance/restore.(*restoreRunnable).Start\n\tinternal/cmd/manager/instance/restore/restore.go:62\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.20.4/pkg/manager/runnable_group.go:226"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.712560956Z","msg":"Stopping and waiting for non leader election runnables"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.712571176Z","msg":"Stopping and waiting for leader election runnables"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.712613192Z","msg":"Webserver exited","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","address":"localhost:8010"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.712621119Z","msg":"Stopping and waiting for caches"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.712736902Z","msg":"pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.Cluster ended with: an error on the server (\"unable to decode an event from the watch stream: context canceled\") has prevented the request from succeeding"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.712765873Z","msg":"Stopping and waiting for webhooks"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.712772131Z","msg":"Stopping and waiting for HTTP servers"}
full-recovery {"level":"info","ts":"2025-06-25T21:29:39.712774588Z","msg":"Wait completed, proceeding to shutdown the manager"}
full-recovery {"level":"error","ts":"2025-06-25T21:29:39.712780101Z","msg":"restore error","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","error":"while restoring cluster: while activating instance: error starting PostgreSQL instance: exit status 1","stacktrace":"github.com/cloudnative-pg/machinery/pkg/log.(*logger).Error\n\tpkg/mod/github.com/cloudnative-pg/machinery@v0.2.0/pkg/log/log.go:125\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/instance/restore.NewCmd.func1\n\tinternal/cmd/manager/instance/restore/cmd.go:101\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.9.1/command.go:1015\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.9.1/command.go:1148\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.9.1/command.go:1071\nmain.main\n\tcmd/manager/main.go:71\nruntime.main\n\t/opt/hostedtoolcache/go/1.24.3/x64/src/runtime/proc.go:283"}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:39.287076712Z","msg":"sending restore response","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","config":"recovery_target_action = promote\nrestore_command = '/controller/manager wal-restore --log-destination /controller/log/postgres.json %f %p'\n","env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","HOSTNAME=my-great-cluster-deployment-restored-1-full-recovery","SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt","LANG=C.UTF-8","SUMMARY=CloudNativePG Barman plugin","DESCRIPTION=Container image that provides the barman-cloud sidecar","CLUSTER_NAME=my-great-cluster-deployment-restored","PGPORT=5432","PGHOST=/controller/run","TMPDIR=/controller/tmp","CUSTOM_CNPG_VERSION=v1","PGDATA=/var/lib/postgresql/data/pgdata","POD_NAME=my-great-cluster-deployment-restored-1-full-recovery","SPOOL_DIRECTORY=/controller/wal-restore-spool","CUSTOM_CNPG_GROUP=postgresql.cnpg.io","NAMESPACE=cloudnative-pg","PSQL_HISTORY=/controller/tmp/.psql_history","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_RW_SERVICE_PORT_POSTGRES=5432","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_RW_PORT_5432_TCP_ADDR=10.43.185.140","KUBERNETES_PORT_443_TCP=tcp://10.43.0.1:443","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_R_PORT_5432_TCP_ADDR=10.43.46.43","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_R_PORT_5432_TCP_PORT=5432","KUBERNETES_SERVICE_PORT=443","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_R_PORT_5432_TCP=tcp://10.43.46.43:5432","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_RW_SERVICE_HOST=10.43.185.140","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_RW_PORT_5432_TCP_PORT=5432","KUBERNETES_SERVICE_HOST=10.43.0.1","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_RO_PORT_5432_TCP=tcp://10.43.160.253:5432","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_R_SERVICE_PORT_POSTGRES=5432","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_RO_SERVICE_HOST=10.43.160.253","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_RO_PORT_5432_TCP_PROTO=tcp","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_RO_PORT_5432_TCP_ADDR=10.4
3.160.253","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_RW_PORT=tcp://10.43.185.140:5432","KUBERNETES_PORT_443_TCP_PROTO=tcp","KUBERNETES_PORT_443_TCP_PORT=443","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_R_SERVICE_PORT=5432","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_R_PORT=tcp://10.43.46.43:5432","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_RO_SERVICE_PORT_POSTGRES=5432","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_RO_PORT=tcp://10.43.160.253:5432","KUBERNETES_PORT=tcp://10.43.0.1:443","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_R_SERVICE_HOST=10.43.46.43","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_RO_PORT_5432_TCP_PORT=5432","KUBERNETES_SERVICE_PORT_HTTPS=443","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_RO_SERVICE_PORT=5432","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_RW_PORT_5432_TCP_PROTO=tcp","KUBERNETES_PORT_443_TCP_ADDR=10.43.0.1","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_RW_PORT_5432_TCP=tcp://10.43.185.140:5432","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_RW_SERVICE_PORT=5432","MY_GREAT_CLUSTER_DEPLOYMENT_RESTORED_R_PORT_5432_TCP_PROTO=tcp","HOME=/","AWS_ACCESS_KEY_ID=AH0aqvvfXgXq4ZOCdwja","AWS_SECRET_ACCESS_KEY=NxZsif8zFSAFtRCBszFP0z2JLUjFKQbjQ7DKlND6"]}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:40.708935332Z","msg":"Stopping and waiting for non leader election runnables"}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:40.708962151Z","msg":"Stopping and waiting for leader election runnables"}
plugin-barman-cloud {"level":"error","ts":"2025-06-25T21:29:40.709078747Z","msg":"While terminating server","logging_pod":"my-great-cluster-deployment-restored-1-full-recovery","stacktrace":"github.com/cloudnative-pg/machinery/pkg/log.(*logger).Error\n\t/go/pkg/mod/github.com/cloudnative-pg/machinery@v0.2.0/pkg/log/log.go:125\ngithub.com/cloudnative-pg/cnpg-i-machinery/pkg/pluginhelper/http.(*Server).Start\n\t/go/pkg/mod/github.com/cloudnative-pg/cnpg-i-machinery@v0.3.0/pkg/pluginhelper/http/server.go:222\ngithub.com/cloudnative-pg/plugin-barman-cloud/internal/cnpgi/restore.(*CNPGI).Start\n\t/workspace/internal/cnpgi/restore/start.go:58\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.21.0/pkg/manager/runnable_group.go:226"}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:40.709121112Z","msg":"Stopping and waiting for caches"}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:40.709130375Z","msg":"Stopping and waiting for webhooks"}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:40.709137005Z","msg":"Stopping and waiting for HTTP servers"}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:40.709195593Z","logger":"controller-runtime.metrics","msg":"Shutting down metrics server with timeout of 1 minute"}
plugin-barman-cloud {"level":"info","ts":"2025-06-25T21:29:40.709239423Z","msg":"Wait completed, proceeding to shutdown the manager"}

Code of Conduct

  • I agree to follow this project's Code of Conduct

Labels

triage: Pending triage
