All Systems Operational

Compute capacity Operational
ap-northeast-1 Operational
ap-northeast-2 Operational
ap-south-1 Operational
ap-southeast-1 Operational
ap-southeast-2 Operational
ca-central-1 Operational
eu-central-1 Operational
eu-central-2 Operational
eu-north-1 Operational
eu-west-1 Operational
eu-west-2 Operational
eu-west-3 Operational
sa-east-1 Operational
us-east-1 Operational
us-east-2 Operational
us-west-1 Operational
Analytics: Operational (99.76 % uptime over the past 90 days)
API Gateway: Operational (99.82 % uptime over the past 90 days)
Auth: Operational (99.82 % uptime over the past 90 days)
Connection Pooler: Operational (99.81 % uptime over the past 90 days)
Dashboard: Operational (99.77 % uptime over the past 90 days)
Database: Operational (99.76 % uptime over the past 90 days)
Edge Functions: Operational (99.75 % uptime over the past 90 days)
Management API: Operational (99.75 % uptime over the past 90 days)
Realtime: Operational (99.82 % uptime over the past 90 days)
Storage: Operational (99.82 % uptime over the past 90 days)
REST API Latency charts (Singapore, North Virginia, Frankfurt): live data not captured.
Mar 27, 2026
Resolved - We have confirmed that project creation has returned to normal.
Mar 27, 18:43 UTC
Monitoring - Project creation has returned to normal. We continue to actively monitor and operate our platform for stability and consistency.
Mar 27, 18:18 UTC
Identified - We have experienced problems with project creation; the issue has been identified and mitigation steps have been implemented.
Mar 27, 17:58 UTC
Resolved - Log ingestion has returned to normal.
Mar 27, 17:03 UTC
Monitoring - Our capacity mitigations have taken effect and ingestion has returned to normal. Customers should now see current log data.
Mar 27, 16:17 UTC
Update - We continue to increase capacity in targeted areas to mitigate log ingestion degradation. It will take some time for the ingestion backlog to be eliminated.
Mar 27, 16:12 UTC
Update - We have implemented increased capacity and logs are returning to normal. It will take some time for the ingestion backlog to be eliminated. Some services, such as realtime, storage, and api-gateway, will have lost logs during the ingestion incident.
Mar 27, 15:52 UTC
Identified - We have identified the issue with log ingestion. We are testing mitigation options, including bringing up additional capacity. We will continue to update this page with our progress.
Mar 27, 15:33 UTC
Investigating - We are investigating a problem with log ingestion. Users may experience delays in current log data.
Mar 27, 15:13 UTC
Resolved - The issue has been resolved and Realtime services have returned to normal.
Mar 27, 02:58 UTC
Monitoring - The fix has been implemented and we’re monitoring to ensure stability.
Mar 27, 02:48 UTC
Update - We appreciate your patience as we continue to work on a fix. We will provide updates as they become available.
Mar 27, 01:44 UTC
Update - We are working on a fix and we’ll continue to provide updates as progress is made.
Mar 27, 01:13 UTC
Identified - We've identified the issue and are currently working on a fix; we will provide an update on our progress soon.
Mar 27, 00:46 UTC
Update - We're still investigating network connectivity issues with Realtime.
Mar 27, 00:20 UTC
Investigating - We are investigating network connectivity issues with Realtime.
Mar 26, 23:41 UTC
Mar 26, 2026
Mar 25, 2026

No incidents reported.

Mar 24, 2026
Resolved - Branching access has returned to normal; this issue is resolved.
Mar 24, 15:56 UTC
Monitoring - The fix has been implemented and branching access has returned to normal.
Mar 24, 15:27 UTC
Identified - We are aware of an issue affecting users creating/updating branches. A permission error in our branching workflow is causing failures for all branches. We have identified the root cause and a fix is being deployed. We will provide an update once resolved.
Mar 24, 15:13 UTC
Mar 23, 2026
Resolved - This issue is now resolved. All logging ingestion has resumed in all regions.
Mar 23, 18:56 UTC
Monitoring - A fix has been implemented.
The system is now operating normally, and we will continue to monitor.

Mar 23, 17:05 UTC
Identified - The cause of the issue has been identified. Our engineering team is working on a fix.
Mar 23, 16:38 UTC
Investigating - We are currently investigating an issue affecting Supabase Logs, resulting in partial log ingestion for some services.
Users may experience delays or errors when accessing logs. In some cases, logs may be partially ingested or not ingested at all.
Projects remain fully functional. This issue is limited to logging only.

Our team is actively working to identify the root cause and restore normal performance as quickly as possible.
We will provide further updates as more information becomes available.

Mar 23, 16:28 UTC
Resolved - Error rates have returned to normal. This incident has been resolved.
Mar 23, 15:42 UTC
Monitoring - We carried out a controlled redeployment of the affected services to restore stability.
The system is now operating normally, and we will continue to monitor.

Mar 23, 14:32 UTC
Investigating - We are currently investigating elevated error rates affecting Edge Functions in the eu-central-1 (Frankfurt) region.
Mar 23, 14:16 UTC
Mar 22, 2026

No incidents reported.

Mar 21, 2026

No incidents reported.

Mar 20, 2026

No incidents reported.

Mar 19, 2026
Resolved - Project creation is now succeeding in all regions.
Mar 19, 22:00 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 19, 21:21 UTC
Identified - We have identified the issue and a fix is underway.
Mar 19, 21:08 UTC
Investigating - We are currently investigating project creation failures across multiple regions.
Mar 19, 20:55 UTC
Mar 18, 2026
Resolved - The fix was implemented. All impacted projects and newly created projects in the region are now successfully connecting to storage services.
Mar 18, 18:21 UTC
Identified - The root cause, involving task instances and batch sending events, has been identified. Our engineering team is working on a fix.
Mar 18, 17:53 UTC
Update - We are continuing to investigate elevated error rates affecting storage services in the ap-northeast-1 (Tokyo) region.
While we are still observing a higher-than-normal level of errors, initial findings indicate that the overall impact is lower than first expected. Many affected requests are succeeding upon retry.
Our team is actively working to determine the root cause and will provide further updates as more information becomes available.

Mar 18, 17:30 UTC
Update - We are continuing to investigate elevated error rates affecting storage services in the ap-northeast-1 (Tokyo) region.
Newly created projects may still experience issues connecting to storage, while existing projects remain unaffected at this time.
Our team is actively working to determine the root cause and assess the full impact.
We will provide further updates as more information becomes available.

Mar 18, 16:58 UTC
Investigating - We are currently investigating elevated error rates affecting storage services in the ap-northeast-1 (Tokyo) region.
Newly created projects may experience issues connecting to storage, while existing projects remain unaffected at this time.
We are actively working to understand the full scope of the impact and will provide further updates as more information becomes available.

Mar 18, 16:27 UTC
Mar 17, 2026
Resolved - This incident has been resolved.
Mar 17, 07:41 UTC
Monitoring - A fix has been implemented and we are monitoring.
Mar 17, 07:07 UTC
Identified - We have identified the root cause and are currently working on a fix.
Mar 17, 05:11 UTC
Investigating - Some custom Postgres configurations applied through the Supabase CLI are not currently taking effect for projects running Postgres version 17.6.1.084. Projects that do not use CLI-managed Postgres configurations, or that are running earlier Postgres versions, are not affected.
Mar 17, 04:50 UTC
Mar 16, 2026

No incidents reported.

Mar 15, 2026

No incidents reported.

Mar 14, 2026

No incidents reported.

Mar 13, 2026
Resolved - Error rates have returned to normal, and connectivity has stabilised.
Mar 13, 10:19 UTC
Monitoring - A fix has been implemented and error rates have reduced.
Our team is actively monitoring the system to ensure stability.

Mar 13, 09:40 UTC
Update - We continue to investigate increased error rates relating to the connection pooler in ap-northeast-2 (South Korea).
We will provide further updates as information becomes available.

Mar 13, 09:25 UTC
Investigating - We are currently investigating increased error rates relating to the connection pooler in ap-northeast-2 (South Korea).
We will provide further updates as information becomes available.

Mar 13, 08:54 UTC