Replies: 10 comments 22 replies
- I would suggest checking whether this has been reported to Ceph via their Redmine tracker. If not, please report it there; I would hope this would be a high-priority issue for them to investigate. https://tracker.ceph.com/projects/ceph/issues
- Interesting observation... somewhere along the way this Replaced it with: And within minutes the
- I got rid of this constant warning by using this configuration:
- Same issue after upgrading to 19.2.3.
- So what's the solution?
- https://tracker.ceph.com/issues/62500
- Yes, we need to set
- 🧩 Fix for “OSD(s) Experiencing Slow Operations in BlueStore”

  I encountered the same issue after upgrading Ceph. This warning usually doesn’t indicate failing hardware; it’s caused by BlueStore I/O contention and overly sensitive warning thresholds, especially in mixed HDD + SSD clusters.

  ✅ Recommended Configuration

  Then verify:

  ⚙️ Explanation
  - Limit discard concurrency:
  - Enable discard only where it helps:
  - Adjust warning thresholds:

  After applying these settings, the “OSD(s) experiencing slow operations in BlueStore” warning disappeared, and overall OSD latency stabilized.
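The config snippets in the comment above did not survive the page rendering, so the exact settings are unknown. As a rough sketch only: on Squid (19.2.x), the three knobs the comment describes map to option names like the following. These names and values are my assumptions, not the original poster's config; verify each against `ceph config help <option>` for your release before applying.

```shell
# Hedged sketch, assuming Ceph Squid (19.2.x); option names and values are
# illustrative, not the original poster's configuration.

# Limit discard concurrency:
ceph config set osd bdev_async_discard_threads 1

# Enable discard only where it helps (per-OSD, e.g. only SSD/NVMe-backed OSDs):
ceph config set osd.10 bdev_enable_discard true

# Relax the BlueStore slow-op warning thresholds:
ceph config set osd bluestore_slow_ops_warn_threshold 10
ceph config set osd bluestore_slow_ops_warn_lifetime 300

# Then verify what is actually set:
ceph config dump | grep -E 'discard|slow_ops'
```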
- I was also having this error and ran these commands, and now I have 12 OSDs down with this error. In a bit of a panic, and a little help would be very appreciated. I am on ceph version 17.2.8 (dff5d121918807afac5d101ea65f4b00ad7b56d8) quincy (stable).
- For ceph version v16.2.14, how can this be avoided? @satoru-takeuchi



- rook-version: v1.16.3
  ceph-version: 19.2.1-0
  K3S: v1.31.5+k3s1
  Ubuntu: 24.04.2 LTS
  Kernel: 6.8.0-51-generic

  Cluster was stable with rook v1.16.3 and ceph 19.2.0-0. Only ceph was upgraded, to 19.2.1-0, and I've been fighting off this "OSD(s) experiencing slow operations in BlueStore" message since. Devices listed are NVMe. No issues seen with smartctl or nvme-cli. Over a few days the number of devices changes between 1 and 3, but tends to be the same 3.

  OSD.10 logs, keyword search for "slow" (everything has a zero). OSD.9 logs (everything has a zero). OSD.6 logs (at least this has some debug messages about "slow operation"):

  I'm not finding any documentation on how to troubleshoot this. Not seeing any messages about scrub issues or slow trimming. Restarting the OSD does resolve it for a few hours, but it comes back.
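For digging into which operations are actually slow, a sketch of the admin-socket commands I would reach for (osd.6 is taken from the report above; these assume access to the OSD's admin socket, e.g. inside the rook-ceph toolbox or a `cephadm shell`):

```shell
# Hedged troubleshooting sketch; requires access to the OSD admin socket
# (e.g. from the rook-ceph toolbox pod or `cephadm shell` on the host).

ceph health detail                         # which OSDs are currently flagged
ceph daemon osd.6 dump_historic_slow_ops   # recent slow ops with per-event timelines
ceph daemon osd.6 perf dump                # OSD/BlueStore perf counters, in JSON
```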