
Prometheus v2.52.0 raises "Error on ingesting samples with different value but same timestamp" for kube-state-metrics #14089

@rgarcia89

Description

What did you do?

Hello,

After updating to Prometheus v2.52.0 (https://github.com/prometheus/prometheus/releases/tag/v2.52.0), Prometheus started logging errors about duplicated samples coming from kube-state-metrics.

As a result, the following rule fired. It is part of the Prometheus Operator's kube-prometheus project (https://github.com/prometheus-operator/kube-prometheus), which I use to deploy the monitoring environment:

- alert: PrometheusDuplicateTimestamps
  annotations:
    description: Prometheus {{$labels.namespace}}/{{$labels.pod}} is dropping {{ printf "%.4g" $value  }} samples/s with different values but duplicated timestamp.
    runbook_url: https://runbooks.prometheus-operator.dev/runbooks/prometheus/prometheusduplicatetimestamps
    summary: Prometheus is dropping samples with duplicate timestamps.
  expr: |
    rate(prometheus_target_scrapes_sample_duplicate_timestamp_total{job=~"prometheus.*",namespace="monitoring"}[5m]) > 0
  for: 10m
  labels:
    severity: warning
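
The alert expression only reveals that some target in the pool is affected. As a minimal sketch (not part of the original report, assuming Prometheus is reachable at http://localhost:9090 with its standard HTTP API; adjust the URL to your deployment), the same expression can be run against the query API to list which targets are currently dropping samples:

# Sketch: list targets flagged for duplicate-timestamp samples via the HTTP API.
import json
import urllib.parse
import urllib.request

PROMETHEUS_URL = "http://localhost:9090"  # assumption: adjust to your setup
QUERY = 'rate(prometheus_target_scrapes_sample_duplicate_timestamp_total[5m]) > 0'

url = PROMETHEUS_URL + "/api/v1/query?" + urllib.parse.urlencode({"query": QUERY})
with urllib.request.urlopen(url) as resp:
    body = json.load(resp)

for result in body["data"]["result"]:
    labels = result["metric"]
    value = result["value"][1]  # instant value, returned as a string
    print(labels.get("job", "?"), labels.get("instance", "?"), value)

In my case this points at the kube-state-metrics scrape pool shown in the logs below.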

I don't see any duplicates in these metrics, which raises the question of why the scrape manager is reporting an issue (a quick way to double-check is sketched right after the dump below).

kube-state-metrics /metrics output:

# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.1373e-05
go_gc_duration_seconds{quantile="0.25"} 4.3932e-05
go_gc_duration_seconds{quantile="0.5"} 5.7374e-05
go_gc_duration_seconds{quantile="0.75"} 7.8606e-05
go_gc_duration_seconds{quantile="1"} 0.091648893
go_gc_duration_seconds_sum 0.348674908
go_gc_duration_seconds_count 143
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 127
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.21.8"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 9.426192e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 1.10105044e+09
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.608599e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 7.005798e+06
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 6.29496e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 9.426192e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 8.8285184e+07
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 1.5163392e+07
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 41144
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 8.1780736e+07
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 1.03448576e+08
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.7155965721111138e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 7.046942e+06
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 2400
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 15600
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 268968
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 619248
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 1.8220024e+07
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 628721
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 1.409024e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 1.409024e+06
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.14024728e+08
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 9
# HELP http_request_duration_seconds A histogram of requests for kube-state-metrics metrics handler.
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{handler="metrics",method="get",le="0.005"} 12
http_request_duration_seconds_bucket{handler="metrics",method="get",le="0.01"} 183
http_request_duration_seconds_bucket{handler="metrics",method="get",le="0.025"} 184
http_request_duration_seconds_bucket{handler="metrics",method="get",le="0.05"} 185
http_request_duration_seconds_bucket{handler="metrics",method="get",le="0.1"} 186
http_request_duration_seconds_bucket{handler="metrics",method="get",le="0.25"} 186
http_request_duration_seconds_bucket{handler="metrics",method="get",le="0.5"} 186
http_request_duration_seconds_bucket{handler="metrics",method="get",le="1"} 186
http_request_duration_seconds_bucket{handler="metrics",method="get",le="2.5"} 186
http_request_duration_seconds_bucket{handler="metrics",method="get",le="5"} 186
http_request_duration_seconds_bucket{handler="metrics",method="get",le="10"} 186
http_request_duration_seconds_bucket{handler="metrics",method="get",le="+Inf"} 186
http_request_duration_seconds_sum{handler="metrics",method="get"} 1.295945279
http_request_duration_seconds_count{handler="metrics",method="get"} 186
# HELP kube_state_metrics_build_info A metric with a constant '1' value labeled by version, revision, branch, goversion from which kube_state_metrics was built, and the goos and goarch for the build.
# TYPE kube_state_metrics_build_info gauge
kube_state_metrics_build_info{branch="",goarch="amd64",goos="linux",goversion="go1.21.8",revision="unknown",tags="unknown",version="v2.12.0"} 1
# HELP kube_state_metrics_custom_resource_state_add_events_total Number of times that the CRD informer triggered the add event.
# TYPE kube_state_metrics_custom_resource_state_add_events_total counter
kube_state_metrics_custom_resource_state_add_events_total 0
# HELP kube_state_metrics_custom_resource_state_cache Net amount of CRDs affecting the cache currently.
# TYPE kube_state_metrics_custom_resource_state_cache gauge
kube_state_metrics_custom_resource_state_cache 0
# HELP kube_state_metrics_custom_resource_state_delete_events_total Number of times that the CRD informer triggered the remove event.
# TYPE kube_state_metrics_custom_resource_state_delete_events_total counter
kube_state_metrics_custom_resource_state_delete_events_total 0
# HELP kube_state_metrics_list_total Number of total resource list in kube-state-metrics
# TYPE kube_state_metrics_list_total counter
kube_state_metrics_list_total{resource="*v1.CertificateSigningRequest",result="success"} 1
kube_state_metrics_list_total{resource="*v1.ConfigMap",result="success"} 1
kube_state_metrics_list_total{resource="*v1.CronJob",result="success"} 1
kube_state_metrics_list_total{resource="*v1.DaemonSet",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Deployment",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Endpoints",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Ingress",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Job",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Lease",result="success"} 1
kube_state_metrics_list_total{resource="*v1.LimitRange",result="success"} 1
kube_state_metrics_list_total{resource="*v1.MutatingWebhookConfiguration",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Namespace",result="success"} 1
kube_state_metrics_list_total{resource="*v1.NetworkPolicy",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Node",result="success"} 1
kube_state_metrics_list_total{resource="*v1.PersistentVolume",result="success"} 1
kube_state_metrics_list_total{resource="*v1.PersistentVolumeClaim",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Pod",result="success"} 1
kube_state_metrics_list_total{resource="*v1.PodDisruptionBudget",result="success"} 1
kube_state_metrics_list_total{resource="*v1.ReplicaSet",result="success"} 1
kube_state_metrics_list_total{resource="*v1.ReplicationController",result="success"} 1
kube_state_metrics_list_total{resource="*v1.ResourceQuota",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Secret",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Service",result="success"} 1
kube_state_metrics_list_total{resource="*v1.StatefulSet",result="success"} 1
kube_state_metrics_list_total{resource="*v1.StorageClass",result="success"} 1
kube_state_metrics_list_total{resource="*v1.ValidatingWebhookConfiguration",result="success"} 1
kube_state_metrics_list_total{resource="*v1.VolumeAttachment",result="success"} 1
kube_state_metrics_list_total{resource="*v2.HorizontalPodAutoscaler",result="success"} 1
# HELP kube_state_metrics_shard_ordinal Current sharding ordinal/index of this instance
# TYPE kube_state_metrics_shard_ordinal gauge
kube_state_metrics_shard_ordinal{shard_ordinal="0"} 0
# HELP kube_state_metrics_total_shards Number of total shards this instance is aware of
# TYPE kube_state_metrics_total_shards gauge
kube_state_metrics_total_shards 1
# HELP kube_state_metrics_watch_total Number of total resource watches in kube-state-metrics
# TYPE kube_state_metrics_watch_total counter
kube_state_metrics_watch_total{resource="*v1.CertificateSigningRequest",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.ConfigMap",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.CronJob",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.DaemonSet",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.Deployment",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.Endpoints",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.Ingress",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.Job",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.Lease",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.LimitRange",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.MutatingWebhookConfiguration",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.Namespace",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.NetworkPolicy",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.Node",result="success"} 12
kube_state_metrics_watch_total{resource="*v1.PersistentVolume",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.PersistentVolumeClaim",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.Pod",result="success"} 12
kube_state_metrics_watch_total{resource="*v1.PodDisruptionBudget",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.ReplicaSet",result="success"} 12
kube_state_metrics_watch_total{resource="*v1.ReplicationController",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.ResourceQuota",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.Secret",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.Service",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.StatefulSet",result="success"} 12
kube_state_metrics_watch_total{resource="*v1.StorageClass",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.ValidatingWebhookConfiguration",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.VolumeAttachment",result="success"} 15
kube_state_metrics_watch_total{resource="*v2.HorizontalPodAutoscaler",result="success"} 14
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 15.64
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 12
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 8.7568384e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.71559079044e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.36824832e+09
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
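
To sanity-check the claim that there are no duplicates, here is a minimal sketch (assuming the dump above is saved to a local file, called ksm_metrics.txt purely for illustration) that counts how often each series, i.e. metric name plus label set, occurs in the exposition output. The scrape manager's warning would correspond to a series appearing more than once within a single scrape:

# Sketch: count duplicated series in a saved /metrics dump.
# "ksm_metrics.txt" is a hypothetical file name for the output pasted above.
from collections import Counter

def series_key(line: str) -> str:
    # Keep only "name{labels}" (label values may contain spaces),
    # dropping the sample value and any optional timestamp.
    if "{" in line:
        return line[: line.rindex("}") + 1]
    return line.split()[0]

counts = Counter()
with open("ksm_metrics.txt") as fh:
    for raw in fh:
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE comments and blank lines
        counts[series_key(line)] += 1

duplicates = {series: n for series, n in counts.items() if n > 1}
print(duplicates if duplicates else "no duplicated series found")

Running this against the dump above reports no duplicated series, which is why the warning is surprising.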

What did you expect to see?

No response

What did you see instead? Under which circumstances?

See the Logs section below.

System information

No response

Prometheus version

v2.52.0

Prometheus configuration file

No response

Alertmanager version

No response

Alertmanager configuration file

No response

Logs

ts=2024-05-13T08:34:50.177Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
ts=2024-05-13T08:34:50.177Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
ts=2024-05-13T08:34:50.186Z caller=kubernetes.go:331 level=info component="discovery manager scrape" discovery=kubernetes config=serviceMonitor/gitlab-runner/gitlab-runner/0 msg="Using pod service account via in-cluster config"
...
ts=2024-05-13T08:34:50.192Z caller=kubernetes.go:331 level=info component="discovery manager notify" discovery=kubernetes config=config-0 msg="Using pod service account via in-cluster config"
ts=2024-05-13T08:34:50.197Z caller=klog.go:124 level=error component=k8s_client_runtime func=Errorf msg="Unexpected error when reading response body: context canceled"
ts=2024-05-13T08:34:50.215Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml totalDuration=38.200371ms db_storage=1.931µs remote_storage=107.309µs web_handler=601ns query_engine=967ns scrape=94.875µs scrape_sd=5.973043ms notify=14.947µs notify_sd=312.05µs rules=22.883265ms tracing=5.768µs
ts=2024-05-13T08:34:52.704Z caller=dedupe.go:112 component=remote level=info remote_name=18d395 url=https://prometheus-lab.net/api/v1/write msg="Done replaying WAL" duration=2.55382309s
ts=2024-05-13T08:35:13.709Z caller=scrape.go:1738 level=warn component="scrape manager" scrape_pool=serviceMonitor/monitoring/kube-state-metrics/0 target=https://10.244.1.205:8443/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=1
ts=2024-05-13T08:35:43.437Z caller=scrape.go:1738 level=warn component="scrape manager" scrape_pool=serviceMonitor/monitoring/kube-state-metrics/0 target=https://10.244.1.205:8443/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=1
