osd/scrub: use separate chunk size configuration for shallow scrubs#44749
Conversation
Seems like a good idea; I would like to see perf tests to ensure the new defaults for shallow scrub don't impact availability.

This pull request can no longer be automatically merged: a rebase is needed and changes have to be manually resolved.
Using the existing common default chunk size for scrubs that are not deep scrubs is wasteful: a high ratio of inter-OSD messages per chunk, while the actual OSD work per chunk is minimal.
Signed-off-by: Ronen Friedman <rfriedma@redhat.com>
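As a sketch of how the split configuration might look (option names and the deep-scrub defaults below are assumptions based on this PR's description, not confirmed values), the shallow-scrub chunk bounds would be settable independently of the existing shared ones:

```
# ceph.conf fragment -- hypothetical sketch; option names and values
# are assumptions matching the 100/50 defaults discussed in this PR
[osd]
osd_shallow_scrub_chunk_min = 50    # lower bound on objects per shallow-scrub chunk
osd_shallow_scrub_chunk_max = 100   # upper bound on objects per shallow-scrub chunk
# the pre-existing shared options would continue to govern deep scrubs
osd_scrub_chunk_min = 5
osd_scrub_chunk_max = 25
```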
Scrub performance data collected for various … Please note the …
Note that Mark has also tested how increasing the shallow-scrub chunk size affected scrub duration on an unloaded cluster (without client load): scrubs were faster by about 2.5x, with no adverse effect on client IOPS; the new scheduler seemed to do a great job of prioritizing client work. On a loaded system, using the suggested default values for the shallow chunks (100/50), shallow scrubs took half the time they did with the existing shared chunk size.
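A back-of-the-envelope sketch (with hypothetical object counts, not numbers from this PR) of why a larger shallow-scrub chunk size cuts message overhead: each chunk costs at least one round of inter-OSD scrub messages, so quadrupling the chunk size divides the number of message rounds per PG by roughly four.

```python
def scrub_chunks(num_objects: int, chunk_size: int) -> int:
    """Number of chunks -- and hence rounds of inter-OSD scrub
    messages -- needed to cover a PG of `num_objects` objects."""
    return -(-num_objects // chunk_size)  # ceiling division

pg_objects = 10_000              # hypothetical PG size
old_chunk, new_chunk = 25, 100   # old shared max vs. suggested shallow max

print(scrub_chunks(pg_objects, old_chunk))  # 400 chunks
print(scrub_chunks(pg_objects, new_chunk))  # 100 chunks: 4x fewer message rounds
```

The per-chunk OSD work for a shallow scrub (metadata-only comparison) is small, so the fixed messaging cost dominates; that is the waste the commit message refers to.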
Merging based on my Teuthology runs. All failures were verified to be a result of the 'publish-stats' bug. |