Temporary Performance Degradation on DE1
IF YOU HAVE BEEN AFFECTED AND HAVEN'T RECEIVED A 4-DAY EXTENSION, PLEASE REACH OUT TO OUR SUPPORT TEAM VIA LIVE CHAT OR DISCORD.
Our system is currently performing a RAID 1 resync.
During this process, some operations may be slower than usual.
Normal performance is expected once the resync is complete.
Thank you for your patience.
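For the technically curious: on a Linux host using mdadm software RAID, the kernel reports resync progress through /proc/mdstat. The sketch below is illustrative only, not our internal tooling; it simply parses the completion percentage and estimated finish time from that file.

    # Illustrative sketch only -- not our internal tooling. Assumes a Linux
    # host where mdadm software RAID reports status via /proc/mdstat.
    import re

    def resync_progress(path="/proc/mdstat"):
        """Return (percent_done, minutes_left) for a running resync, or None."""
        try:
            with open(path) as f:
                status = f.read()
        except FileNotFoundError:
            return None  # no mdadm arrays on this host
        # Matches a line such as:
        #   [=>....]  resync =  8.5% (1677.../1960...) finish=17.3min speed=...
        match = re.search(r"resync\s*=\s*([\d.]+)%.*?finish=([\d.]+)min", status)
        if match is None:
            return None  # no resync currently running
        return float(match.group(1)), float(match.group(2))

    progress = resync_progress()
    if progress is None:
        print("No resync in progress.")
    else:
        pct, mins = progress
        print(f"Resync {pct:.1f}% complete, roughly {mins:.0f} min remaining.")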
DE1 Maintenance on 27 Mar 2026 18:23:50 (UTC+02:00)
Monitoring on 27 Mar 2026 20:46:22 (UTC+02:00)
Our system is currently performing a RAID 1 resync.
During this process, some operations may be slower than usual.
Normal performance is expected once the resync is complete.
Thank you for your patience.
Update on 27 Mar 2026 19:22:57 (UTC+02:00)
The drive has been successfully cloned. We now have to wait for Remote Hands from the datacenter to remove the old drives and boot the server back up. Sorry for the additional delay.
Update on 27 Mar 2026 18:58:14 (UTC+02:00)
This maintenance was supposed to be a quick server stop, a drive clone onto bigger, newer drives, and then back up; it ended up taking longer because the datacenter's Remote Hands arrived later than we had scheduled them.
We've received a swap for the defective 4TB NVMe drive and are now cloning the RAID array.
ETA to completion: 30-40 minutes.
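As a rough sanity check on that estimate: a clone of this kind is bound by sustained sequential throughput, so the time for a 4TB drive is straightforward arithmetic. The throughput figures below are assumptions for illustration, not measurements of the actual hardware.

    # Back-of-the-envelope clone-time estimate. The throughput values are
    # assumed for illustration, not measured on the actual hardware.
    DRIVE_BYTES = 4 * 10**12  # 4 TB, decimal terabytes

    for gb_per_s in (1.7, 2.0):  # assumed sustained sequential throughput
        minutes = DRIVE_BYTES / (gb_per_s * 10**9) / 60
        print(f"At {gb_per_s} GB/s: ~{minutes:.0f} min")

At an assumed 1.7-2.0 GB/s sustained, that works out to roughly 33-39 minutes, consistent with the 30-40 minute ETA above.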
In progress on 27 Mar 2026 18:23:50 (UTC+02:00)
This maintenance was supposed to be a quick server stop, a drive clone onto bigger, newer drives, and then back up; it ended up taking longer because the datacenter's Remote Hands arrived later than we had scheduled them.
Quick update: One of the initial 4TB NVMe drives failed our hardware tests. We've pulled the set to avoid any potential RAID instability and are sourcing a replacement set now. We're prioritizing a clean, stable migration over a rushed one. More info to follow shortly.
