[Feature]: Add support for ephemeral volumes/data loss on postgres instances #7566

@connorbradshaw10

Description

Is there an existing issue already for this feature request/idea?

  • I have searched for an existing issue and could not find anything. I believe this is a new feature request to be evaluated.

What problem is this feature going to solve? Why should it be added?

I am trying to integrate Azure Container Storage with CNPG. Azure Container Storage offers a storage option backed by local NVMe disks, which delivers very high performance for database workloads. The downside is that local NVMe disks in Azure are ephemeral: if a node is lost, the data in any PVCs created on that node is lost with it. With CNPG, when a node is scaled down or upgraded, the CNPG instance on that node is migrated to a different node. The underlying volume is recreated empty, but the PVC/PV does not change. When the CNPG instance starts back up, it expects /var/lib/postgresql/data/pgdata to exist; since that directory is gone, the pod is stuck in a CrashLoopBackOff state.

Describe the solution you'd like

I am looking for a solution where CNPG can recognize complete data loss on an instance and automatically rebuild it as if it were a new instance, or where CNPG supports backing the PostgreSQL data directory with a generic ephemeral volume.
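To make the second option concrete, a Cluster manifest might look like the sketch below. The ephemeral flag shown here does not exist in the current CNPG API, and the storage class name is also an assumption; both are illustrative only.

```yaml
# Hypothetical API sketch -- the `ephemeral` field below is NOT part of the
# CNPG Cluster spec today; it illustrates the requested behavior.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-local-nvme
spec:
  instances: 3
  storage:
    # Storage class name for Azure Container Storage local NVMe is an
    # assumption; substitute whatever class your cluster exposes.
    storageClass: acstor-ephemeraldisk-nvme
    size: 100Gi
    # Hypothetical flag: tells the operator the volume may vanish with its
    # node, so a missing pgdata should trigger an automatic re-clone from a
    # healthy instance instead of a CrashLoopBackOff.
    ephemeral: true
```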

Describe alternatives you've considered

There is a mitigation: running kubectl cnpg destroy on the stuck instance removes it (and its PVC) so the operator re-creates it, but it would be nice to be able to automate this process.
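The manual mitigation could be scripted along these lines. This is an untested sketch against a live cluster: it assumes the cnpg kubectl plugin is installed, that instance pods carry the standard cnpg.io/cluster label, and that kubectl cnpg destroy accepts the cluster name and instance serial (verify against the plugin's help before relying on it).

```shell
#!/usr/bin/env sh
# Rough automation sketch for the manual mitigation (not a supported
# operator feature). Cluster name and namespace below are placeholders.
CLUSTER=cluster-local-nvme
NAMESPACE=default

# List instance pods stuck in CrashLoopBackOff and destroy each one so the
# operator re-creates it from a healthy instance.
kubectl get pods -n "$NAMESPACE" -l cnpg.io/cluster="$CLUSTER" \
  -o jsonpath='{range .items[?(@.status.containerStatuses[0].state.waiting.reason=="CrashLoopBackOff")]}{.metadata.name}{"\n"}{end}' |
while read -r pod; do
  # Pod names follow the <cluster>-<serial> convention, so strip everything
  # up to the last dash to get the instance serial.
  kubectl cnpg destroy "$CLUSTER" "${pod##*-}" -n "$NAMESPACE"
done
```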

Additional context

@eh8

Backport?

Yes

Are you willing to actively contribute to this feature?

Yes

Code of Conduct

  • I agree to follow this project's Code of Conduct

Metadata

Projects

Status

Backlog

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests