Monitoring - Customers should no longer experience issues.
We will continue to monitor and provide updates.
Jan 16, 2026 - 02:27 UTC
Update - We are continuing to investigate this issue.
Jan 16, 2026 - 01:35 UTC
Investigating - Users may experience intermittent 5xx errors when writing metrics, though retries may eventually succeed, which can lead to delayed or missing data.
We continue to investigate and will update when we have more to share.
Jan 16, 2026 - 01:33 UTC
Monitoring - As of 00:28 UTC, we have observed improvement in the partial write outage. Customers should no longer experience issues with metrics ingestion.
We will continue to monitor and provide updates.
Jan 16, 2026 - 00:59 UTC
Investigating - As of 23:57 UTC, our engineers became aware of an issue with prod-us-west-0 resulting in a partial write outage. Users may experience intermittent 5xx errors when writing metrics, though retries may eventually succeed, which can lead to delayed or missing data.
We continue to investigate and will update when we have more to share.
Jan 16, 2026 - 00:28 UTC
Grafana Cloud: Hosted Grafana
Operational
AWS Australia - prod-ap-southeast-2
Operational
AWS Brazil - prod-sa-east-1
Operational
AWS Canada - prod-ca-east-0
Operational
AWS Germany - prod-eu-west-2
Operational
AWS Germany - prod-eu-west-4
Operational
AWS India - prod-ap-south-1
Operational
AWS Japan - prod-ap-northeast-0
Operational
AWS UAE - prod-me-central-1
Operational
AWS Singapore - prod-ap-southeast-1
Operational
AWS Sweden - prod-eu-north-0
Operational
AWS US East - prod-us-east-0
Operational
AWS US East - prod-us-east-2
Operational
AWS US West - prod-us-west-0
Operational
Azure Netherlands - prod-eu-west-3
Operational
Azure US Central - us-central2
Operational
GCP Australia - prod-au-southeast-0
Operational
GCP Belgium - prod-eu-west-0
Operational
GCP Brazil - prod-sa-east-0
Operational
GCP India - prod-ap-south-0
Operational
GCP Singapore - prod-ap-southeast-0
Operational
GCP UK - prod-gb-south-0
Operational
GCP US Central - prod-us-central-0
Operational
GCP US Central - prod-us-central-3
Operational
GCP US Central - prod-us-central-4
Operational
GCP US East - prod-us-east-1
Operational
play.grafana.org
Operational
AWS Australia - prod-au-southeast-1
Operational
AWS UK - prod-gb-south-1
Operational
Grafana Cloud: Graphite
Operational
AWS Australia - prod-ap-southeast-2: Querying
Operational
AWS Australia - prod-ap-southeast-2: Ingestion
Operational
AWS Brazil - prod-sa-east-1: Querying
Operational
AWS Brazil - prod-sa-east-1: Ingestion
Operational
We will perform planned maintenance on synthetic monitoring secrets on Wednesday, January 21st, from 13:00 to 16:00 UTC, in the following regions: prod-us-central-0, prod-us-east-0, and prod-eu-west-2.
During the maintenance, synthetic monitoring checks that use secrets will continue to run normally, but the secrets will be in a read-only state. Attempts to create, modify, or delete secrets during the maintenance will return an error until the maintenance is complete.
This maintenance is required to ensure the reliability of the secrets management system as we prepare for general availability of the feature. We will provide updates here as the maintenance progresses.
Posted on Jan 15, 2026 - 19:25 UTC
Completed -
The scheduled maintenance has been completed.
Jan 15, 12:00 UTC
Verifying -
Verification is currently underway for the maintenance items.
Jan 15, 11:20 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jan 15, 11:00 UTC
Scheduled -
Alert instances for the Synthetic Monitoring ProbeFailedExecutionsTooHigh provisioned alert rule that are firing during the maintenance might resolve and then fire again within roughly one minute.
Jan 15, 09:37 UTC
Resolved -
The scope of this incident was smaller than originally anticipated.
As of 16:27 UTC, our engineering team merged a fix for those affected, and we consider this issue resolved.
Jan 14, 20:17 UTC
Investigating -
We're experiencing connectivity loss for Azure PrivateLink endpoints in all available Azure regions. The issue affects users trying to ingest Alloy data or use PDC over Azure PrivateLink. Our team is actively investigating the root cause.
Jan 14, 14:30 UTC
Completed -
The scheduled maintenance has been completed.
Jan 14, 09:00 UTC
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jan 14, 07:00 UTC
Scheduled -
During the maintenance window, we will perform planned minor version upgrades on Grafana databases. Users may experience brief service interruptions lasting up to one minute. During this period, Grafana instances may become inaccessible. Other services are unaffected.
Jan 7, 10:02 UTC
Resolved -
We continue to observe sustained recovery. At this time, we consider this issue resolved. No further updates.
Jan 12, 18:21 UTC
Monitoring -
Engineering has released a fix and as of 17:01 UTC, customers should no longer experience connectivity issues. We will continue to monitor for recurrence and provide updates accordingly.
Jan 12, 17:01 UTC
Identified -
Engineering has identified the issue and will be deploying a fix shortly. At this time, users will continue to experience disruptions for queries routed via PDC.
We will continue to provide updates as more information is shared.
Jan 12, 16:50 UTC
Investigating -
We are investigating an issue in prod-eu-west-3 where PDC agents are failing to maintain or re-establish connectivity. Affected agents may cause disruptions or degraded performance for customer queries routed over PDC. We'll share updates as we learn more.
Jan 12, 15:44 UTC
Resolved -
Engineering has released a fix, and we continue to observe recovery. As of 15:12 UTC, we consider this resolved.
Jan 12, 15:26 UTC
Update -
There was a full degradation of the write service between 9:13 and 9:35 UTC. The cell is operational, but there is still degradation in the write path. Our Engineering team is actively working on this.
Jan 12, 11:41 UTC
Update -
We are continuing to investigate this issue.
Jan 12, 09:09 UTC
Investigating -
We have been alerted to Tempo write degradation in prod-eu-west-3 - tempo-prod-08. The cell is operational, but there is degradation in the write path; write requests are taking longer than normal. This started at 7:00 UTC. Our Engineering team is actively investigating.
Jan 12, 09:03 UTC
Resolved -
Between 20:23 UTC and 20:53 UTC, Grafana Cloud Logs in prod-us-east-3 experienced a write degradation, which may have resulted in delayed or failed log ingestion for some customers.
The issue has been fully resolved, and the cell is currently operating normally. We are continuing to investigate the root cause and will provide additional details if relevant.
Jan 9, 20:30 UTC
Resolved -
There was a ~15 minute partial write outage for some customers in prod-us-central-0, lasting from 15:43 to 15:57 UTC.
Jan 7, 17:41 UTC
Resolved -
This incident has been resolved.
Jan 6, 20:26 UTC
Monitoring -
We are seeing some recovery in affected products. We are continuing to monitor the progress.
Jan 6, 17:50 UTC
Investigating -
We are currently investigating an issue causing degraded Mimir and Tempo read performance in the prod-us-central-7 region.
Jan 6, 17:41 UTC
Resolved -
From 20:32 to 20:37 UTC, a DNS record misconfiguration resulted in temporary Cloudflare 1016 DNS errors on many Grafana Cloud stacks.
The misconfiguration was mitigated within 5 minutes, and we are working with Cloudflare to better understand why the particular misconfiguration resulted in this outage.
Jan 6, 15:09 UTC