[Bug]: Errors scaling up after an in-place major upgrade when VolumeSnapshots backup are available #7705

@mnencia

Description

Is there an existing issue already for this bug?

  • I have searched for an existing issue, and could not find anything. I believe this is a new bug.

I have read the troubleshooting guide

  • I have read the troubleshooting guide and I think this is a new bug.

I am running a supported version of CloudNativePG

  • I am running a supported version of CloudNativePG.

Contact Details

No response

Version

1.26 (latest patch)

What version of Kubernetes are you using?

1.33

What is your Kubernetes environment?

Self-managed: kind (evaluation)

How did you install the operator?

YAML manifest

What happened?

When valid volume snapshot backups are available, replica creation after a successful in-place major version upgrade attempts to reuse the snapshots taken before the upgrade. The restored data directory therefore belongs to the old major version, and the new server fails to start:

{
  "level": "info",
  "ts": "2025-05-29T20:16:42.497601464Z",
  "logger": "postgres",
  "msg": "2025-05-29 20:16:42 UTC FATAL:  database files are incompatible with server",
  "pipe": "stderr",
  "logging_pod": "cluster-example-4"
}
{
  "level": "info",
  "ts": "2025-05-29T20:16:42.497639304Z",
  "logger": "postgres",
  "msg": "2025-05-29 20:16:42 UTC DETAIL:  The data directory was initialized by PostgreSQL version 16, which is not compatible with this version 17.5.",
  "pipe": "stderr",
  "logging_pod": "cluster-example-4"
}
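For context on the FATAL message above: PostgreSQL records the major version that initialized a cluster in the PG_VERSION file at the root of the data directory, and refuses to start if that version does not match the server binary. A minimal sketch of that check (an illustrative Python helper, not the actual server implementation):

```python
from pathlib import Path


def check_data_dir_compat(data_dir: str, server_major: int) -> None:
    """Raise if the data directory was initialized by a different major version.

    Mirrors the check behind "database files are incompatible with server":
    PostgreSQL stores the initializing major version in <data_dir>/PG_VERSION.
    """
    recorded = Path(data_dir, "PG_VERSION").read_text().strip()
    recorded_major = int(recorded.split(".")[0])
    if recorded_major != server_major:
        raise RuntimeError(
            f"database files are incompatible with server: the data directory "
            f"was initialized by PostgreSQL version {recorded_major}, "
            f"which is not compatible with this version {server_major}"
        )
```

A replica restored from a pre-upgrade snapshot carries PG_VERSION 16, so the version-17 server aborts exactly as in the log.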

Cluster resource

Relevant log output

Code of Conduct

  • I agree to follow this project's Code of Conduct
