Is there an existing issue already for this bug?
I have read the troubleshooting guide
I am running a supported version of CloudNativePG
Contact Details
No response
Version
1.24.0
What version of Kubernetes are you using?
1.30
What is your Kubernetes environment?
Cloud: Google GKE
How did you install the operator?
YAML manifest
What happened?
After enabling the podMonitor, I hit a problem similar to prometheus/prometheus#14089: Prometheus (v2.52+) appears to treat the scraped metrics as reporting a duplicate timestamp and refuses to accept the scrape.
I was able to work around the issue by creating a PodMonitor manually (instead of the one created by the operator) and setting honorTimestamps: false.
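For reference, a minimal sketch of the manual PodMonitor workaround. The name, namespace, and cluster name are placeholders for my setup, and the operator-managed PodMonitor should be disabled (e.g. by setting .spec.monitoring.enablePodMonitor to false on the Cluster) so the two do not conflict:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-cluster-metrics      # hypothetical name
  namespace: example-namespace       # hypothetical namespace
spec:
  selector:
    matchLabels:
      cnpg.io/cluster: example-cluster   # label CloudNativePG applies to cluster pods
  podMetricsEndpoints:
    - port: metrics
      honorTimestamps: false   # ignore exported timestamps; Prometheus assigns its own
```

With honorTimestamps: false, Prometheus discards the timestamps exposed by the target and uses the scrape time instead, which avoids the duplicate-timestamp rejection.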
Cluster resource
Relevant log output
Code of Conduct