Invalid metric type error for metric types containing capitals #4602

Description

@benclapp

Bug Report

What did you do?
Upgraded from 2.3.2 to 2.4.0

What did you expect to see?
New features, and existing targets still being scraped successfully.

What did you see instead? Under which circumstances?
It appears that any target exposing a metric type containing capital letters causes the scrape for that target to fail. For example:

| Metric type | Scrape success |
| ----------- | -------------- |
| counter     | Yes            |
| Counter     | No             |
| COUNTER     | No             |

I've scraped the same targets with a 2.3.2 Prometheus and they are appended just fine. Unfortunately the exporter library cannot be rolled back. For reference, we're using this .NET library.
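To illustrate the behaviour, here is a minimal sketch of a case-sensitive `# TYPE` check in Python. This is a hypothetical reproduction of the symptom, not the actual Prometheus parser code: it assumes the 2.4.0 parser compares the type token against the lowercase type names exactly, so `Counter` and `COUNTER` are rejected while `counter` is accepted.

```python
# Hypothetical sketch: a TYPE-line check that compares the type token
# case-sensitively, mirroring the reported 2.4.0 behaviour.
VALID_TYPES = {"counter", "gauge", "histogram", "summary", "untyped"}

def type_line_ok(line: str) -> bool:
    """Return True if a '# TYPE <name> <type>' exposition line would be
    accepted by a parser with a case-sensitive type comparison."""
    parts = line.split()
    if len(parts) != 4 or parts[:2] != ["#", "TYPE"]:
        return False
    # "Counter" and "COUNTER" fail this membership test; "counter" passes.
    return parts[3] in VALID_TYPES

print(type_line_ok("# TYPE http_requests_total counter"))  # True
print(type_line_ok("# TYPE http_requests_total COUNTER"))  # False
```

This matches the table above: only the all-lowercase type token scrapes successfully.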

Environment

  • System information:

    Running in docker, using this image: quay.io/prometheus/prometheus:v2.4.0

  • Prometheus version:

    prometheus, version 2.4.0 (branch: HEAD, revision: 068eaa5)
    build user: root@d84c15ea5e93
    build date: 20180911-10:46:37
    go version: go1.10.3

  • Logs:

level=warn ts=2018-09-13T06:10:37.93652152Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.5.181:9095/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:39.071841437Z caller=manager.go:430 component="rule manager" group=node.rules msg="Error on ingesting results from rule evaluation with different value but same timestamp" numDropped=4
level=warn ts=2018-09-13T06:10:40.851112422Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.4.115:9101/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:41.875560035Z caller=scrape.go:804 component="scrape manager" scrape_pool=custom_tls_scrape_targets target=https://<removed>/api/metrics msg="append failed" err="invalid metric type \"Gauge\""
level=warn ts=2018-09-13T06:10:42.917636496Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.5.181:9095/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:44.068456756Z caller=manager.go:430 component="rule manager" group=node.rules msg="Error on ingesting results from rule evaluation with different value but same timestamp" numDropped=4
level=warn ts=2018-09-13T06:10:45.85053895Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.4.115:9101/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:46.873520059Z caller=scrape.go:804 component="scrape manager" scrape_pool=custom_tls_scrape_targets target=https://<removed>/api/metrics msg="append failed" err="invalid metric type \"Gauge\""
level=warn ts=2018-09-13T06:10:47.919258931Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.5.181:9095/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:49.077866213Z caller=manager.go:430 component="rule manager" group=node.rules msg="Error on ingesting results from rule evaluation with different value but same timestamp" numDropped=4
level=warn ts=2018-09-13T06:10:50.851515285Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.4.115:9101/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:51.884877924Z caller=scrape.go:804 component="scrape manager" scrape_pool=custom_tls_scrape_targets target=https://<removed>/api/metrics msg="append failed" err="invalid metric type \"Gauge\""
level=warn ts=2018-09-13T06:10:52.916790359Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.5.181:9095/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:54.072958435Z caller=manager.go:430 component="rule manager" group=node.rules msg="Error on ingesting results from rule evaluation with different value but same timestamp" numDropped=4
level=warn ts=2018-09-13T06:10:55.851243421Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.4.115:9101/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:56.875002334Z caller=scrape.go:804 component="scrape manager" scrape_pool=custom_tls_scrape_targets target=https://<removed>/api/metrics msg="append failed" err="invalid metric type \"Gauge\""
level=warn ts=2018-09-13T06:10:57.933876143Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.5.181:9095/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:59.087553114Z caller=manager.go:430 component="rule manager" group=node.rules msg="Error on ingesting results from rule evaluation with different value but same timestamp" numDropped=4
level=warn ts=2018-09-13T06:11:00.850462158Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.4.115:9101/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:11:01.944988067Z caller=scrape.go:804 component="scrape manager" scrape_pool=custom_tls_scrape_targets target=https://<removed>/api/metrics msg="append failed" err="invalid metric type \"Gauge\""
level=warn ts=2018-09-13T06:11:02.916852138Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.5.181:9095/metrics msg="append failed" err="invalid metric type \"COUNTER\""
