Relevant telegraf.conf:
[agent]
  collection_jitter = "10s"
  debug = true
  flush_interval = "60s"
  flush_jitter = "10s"
  hostname = "$HOSTNAME"
  interval = "60s"
  logfile = ""
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  omit_hostname = false
  precision = ""
  quiet = false
  round_interval = true

[[outputs.file]]
  files = ["results.txt"]

[[inputs.kube_inventory]]
  namespace = ""
  url = "https://kubernetes.default"
  insecure_skip_verify = true
  response_timeout = "10s"
  resource_exclude = ["persistentvolumes", "persistentvolumeclaims"]
  selector_include = []
  selector_exclude = []
  fielddrop = ["terminated_reason"]
System info:
Kubernetes v1.18.14
helm.sh/chart: telegraf-1.8.4
image: telegraf:1.19.1
Steps to reproduce:
- helm upgrade --install my-release influxdata/telegraf (https://artifacthub.io/packages/helm/influxdata/telegraf)
- Modify the telegraf deployment to include resource requests and limits
  resources:
    limits:
      cpu: 4192m
      memory: 4Gi
    requests:
      cpu: 4192m
      memory: 4Gi
- Deploy the telegraf.conf above alongside the modified telegraf deployment
Expected behavior:
I expect to see correct values for all metrics generated by the kube_inventory input in results.txt.
Actual behavior:
Some of the kube_inventory metrics report correctly, but the resource-based metrics do not (e.g. kubernetes_node.allocatable_*, kubernetes_node.capacity_*, kubernetes_pod_container.resource_*). In the pod logs, I see many lines like the following:
2021-07-21T16:37:58Z D! [inputs.kube_inventory] failed to parse quantity: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
2021-07-21T16:37:58Z D! [inputs.kube_inventory] failed to parse quantity: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
2021-07-21T16:37:58Z D! [inputs.kube_inventory] failed to parse quantity: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
Here is a sample line from results.txt:
readiness=ready,state=running resource_requests_memory_bytes=0i,resource_limits_millicpu_units=0i,resource_limits_memory_bytes=0i,restarts_total=8i,state_code=0i,resource_requests_millicpu_units=0i
Additional info:
I've tried different values for the limits and requests, such as cpu: "4" or memory: 4000Mi, with no luck.