Description
Is there an existing issue already for this bug?
- I have searched for an existing issue, and could not find anything. I believe this is a new bug.
I have read the troubleshooting guide
- I have read the troubleshooting guide and I think this is a new bug.
I am running a supported version of CloudNativePG
- I am running a supported version of CloudNativePG.
Contact Details
No response
Version
trunk (main)
What version of Kubernetes are you using?
1.34
What is your Kubernetes environment?
Self-managed: kind (evaluation)
How did you install the operator?
YAML manifest
What happened?
User-defined replication slots are not cleaned up from replica pods when both `replicationSlots.highAvailability.enabled` and `replicationSlots.synchronizeReplicas.enabled` are disabled.

This is the root cause of the flaky E2E test in #8968, which was tentatively addressed in #8971 by increasing the timeout; however, the actual bug was not resolved.
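For reference, a minimal `Cluster` fragment with both features disabled might look like the following (the field paths come from the report; the surrounding spec is illustrative, not the reporter's actual manifest):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  replicationSlots:
    highAvailability:
      enabled: false
    synchronizeReplicas:
      enabled: false
```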
Root Cause
When replication slots are disabled, the Replicator stops without performing cleanup:

```go
if config == nil || !config.GetEnabled() {
	ticker.Stop()
	updateInterval = 0
	continue // ← BUG: no cleanup happens
}
```

When both features are disabled, `config.GetEnabled()` returns false, causing the loop to continue without calling `sr.reconcile()`. The cleanup logic in `synchronizeReplicationSlots()` is never executed.
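A minimal sketch of one possible fix: run a final reconcile pass when the configuration transitions to disabled, so stale user-defined slots are dropped before the loop goes idle. The types below (`Config`, `Replicator`, `handleConfig`) are hypothetical stand-ins for the operator's internals and illustrate only the control flow, not the actual CloudNativePG code:

```go
package main

import "fmt"

// Config is a stand-in for the replication-slot configuration.
type Config struct{ Enabled bool }

// GetEnabled mirrors the nil-safe accessor pattern used in the report.
func (c *Config) GetEnabled() bool { return c != nil && c.Enabled }

// Replicator is a stand-in; cleanedUp records whether a reconcile ran.
type Replicator struct{ cleanedUp bool }

// reconcile stands in for synchronizeReplicationSlots(), whose cleanup
// path removes user-defined slots from replicas when sync is disabled.
func (r *Replicator) reconcile() { r.cleanedUp = true }

// handleConfig shows the proposed flow: when slots are disabled, perform
// one final cleanup reconcile instead of silently skipping it.
func handleConfig(r *Replicator, config *Config) {
	if config == nil || !config.GetEnabled() {
		r.reconcile() // final cleanup pass before the loop goes idle
		return
	}
	r.reconcile() // normal periodic reconciliation
}

func main() {
	r := &Replicator{}
	handleConfig(r, &Config{Enabled: false})
	fmt.Println(r.cleanedUp) // true: cleanup still runs when disabled
}
```

The key design point is that "disabled" must be treated as a state to converge to (slots removed), not merely a reason to stop reconciling.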
Cluster resource
Relevant log output
No errors logged - the Replicator silently stops without cleanup.

Code of Conduct
- I agree to follow this project's Code of Conduct