Relevant telegraf.conf
[agent]
  interval = "1m"

[[outputs.prometheus_client]]
  listen = ":9273"
  metric_version = 2
  path = "/metrics"
  expiration_interval = "30s"
  export_timestamp = true

[[inputs.cloudwatch]]
  region = "us-east-1"
  period = "1m"
  delay = "5m"
  interval = "5m"
  namespace = "AWS/RDS"
  statistic_include = ["average", "sum", "minimum", "maximum", "sample_count"]

  [[inputs.cloudwatch.metrics]]
    names = ["CPUUtilization", "DatabaseConnections", "FreeStorageSpace"]

    [[inputs.cloudwatch.metrics.dimensions]]
      name = "DBInstanceIdentifier"
      value = "*"
System info
1.20.3 Docker Image
Docker
No response
Steps to reproduce
- Ensure you're using Telegraf 1.20.3.
- Create a config file similar to the one above; in my experience, the issue affected all metrics.
- Ensure the environment running the Telegraf agent has the necessary AWS IAM permissions to query CloudWatch metrics.
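For reference, a minimal IAM policy sketch granting the read calls the cloudwatch input relies on (ListMetrics and GetMetricData); the statement ID and exact action set here are my assumptions, not taken from this report:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TelegrafCloudWatchRead",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:ListMetrics",
        "cloudwatch:GetMetricData"
      ],
      "Resource": "*"
    }
  ]
}
```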
Expected behavior
The cloudwatch input should query and return all the available metrics and their corresponding time series values based on your conditions.
Actual behavior
All of the time series are returned as normal, except they all contain a value of 0.
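For illustration, the broken scrape of the Prometheus endpoint looked roughly like the following (the instance identifier is hypothetical, and the exact metric names depend on the prometheus_client metric_version = 2 naming): every series is present but flat at 0.

```text
cloudwatch_aws_rds_cpu_utilization_average{db_instance_identifier="example-db",region="us-east-1"} 0
cloudwatch_aws_rds_database_connections_sum{db_instance_identifier="example-db",region="us-east-1"} 0
cloudwatch_aws_rds_free_storage_space_minimum{db_instance_identifier="example-db",region="us-east-1"} 0
```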
Additional info
After doing a docker pull of the latest Telegraf image and recreating the container, all CloudWatch metrics went to 0. I noticed in the changelog that there were some recent changes to the CloudWatch input. I last refreshed Telegraf on October 12th, so according to the changelog the only changes to the CloudWatch input since then would be from #9647.
I resolved my issue by pinning version 1.19 to avoid all the CloudWatch input changes from the past month or two. I attached a screenshot of a panel from one of our dashboards. Note that the metric is still returned; its value is just 0. This happened for all CloudWatch metrics collected by Telegraf.
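The workaround amounts to pinning the image tag instead of tracking latest; a docker-compose sketch, assuming a bind-mounted config at ./telegraf.conf (the paths are illustrative):

```yaml
services:
  telegraf:
    # Pin to 1.19 to avoid the cloudwatch input changes shipped in 1.20.x
    image: telegraf:1.19
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
    ports:
      - "9273:9273"   # outputs.prometheus_client listener
```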
