Description
Is there an existing issue already for this bug?
- I have searched for an existing issue, and could not find anything. I believe this is a new bug.
I have read the troubleshooting guide
- I have read the troubleshooting guide and I think this is a new bug.
I am running a supported version of CloudNativePG
- I have read the troubleshooting guide and I think this is a new bug.
Contact Details
No response
Version
trunk (main)
What version of Kubernetes are you using?
1.34
What is your Kubernetes environment?
Self-managed: kind (evaluation)
How did you install the operator?
YAML manifest
What happened?
The E2E test "Declarative database management / in a Namespace to be deleted manually / will not prevent the deletion of the namespace with lagging finalizers" fails due to a bug in the operator's error handling that causes Database finalizers to be orphaned when Cluster deletion encounters errors.
The operator has a bug in internal/controller/cluster_controller.go lines 165-182 where errors from notifyDeletionToOwnedResources() are logged but not returned due to variable shadowing:
```go
cluster, err := r.getCluster(ctx, req)
if err != nil {
	return ctrl.Result{}, err
}

if cluster == nil {
	// err from deleteDanglingMonitoringQueries is scoped to the if block
	if err := r.deleteDanglingMonitoringQueries(ctx, req.Namespace); err != nil {
		contextLogger.Error(...)
	}

	// err from notifyDeletionToOwnedResources is scoped to the if block
	if err := r.notifyDeletionToOwnedResources(ctx, req.NamespacedName); err != nil {
		contextLogger.Error(...)
	}

	return ctrl.Result{}, err // Returns the original err from getCluster (nil)!
}
```

When notifyDeletionToOwnedResources() fails (e.g., an optimistic locking conflict when removing Database finalizers), the error is logged but never returned. Kubernetes sees "success" and does not requeue the reconciliation, leaving the Database finalizers orphaned permanently.
Cluster resource
N/A
Relevant log output
N/A
Code of Conduct
- I agree to follow this project's Code of Conduct