osd/PG: async-recovery should respect historical missing objects #24004
xiexingguo merged 2 commits into ceph:master
Conversation
src/common/options.cc
      .set_description("Approximate missing objects above which to force auth_log_shard to be primary temporarily"),

-   Option("osd_async_recovery_min_pg_log_entries", Option::TYPE_UINT, Option::LEVEL_ADVANCED)
+   Option("osd_async_recovery_approx_missing_objects", Option::TYPE_UINT, Option::LEVEL_ADVANCED)
Not sure this name is appropriate, since this option now accounts for the difference in log lengths plus missing objects.
eh, what is your suggestion, then? @neha-ojha
difference in length of logs
IMHO, the log difference is essentially an imprecise measure of missing objects, so I guess the naming should be fine? @neha-ojha
I see it as a cost of recovery, more so because now it seems to be dependent on more than one parameter.
What about:

Option("osd_async_recovery_min_cost", Option::TYPE_UINT, Option::LEVEL_ADVANCED)
  .set_description("A mixture measure of number of current log entries difference and historical missing objects, above which we switch to use asynchronous recovery when appropriate")
    if (auth_version > candidate_version) {
      approx_missing_objects += auth_version - candidate_version;
    }
    if (approx_missing_objects > cct->_conf.get_val<uint64_t>(
@xiexingguo If num_objects_missing is reliable, this change should work. Wondering if you have done some evaluation like #23663 (comment), to compare how this change impacts the overall performance of async recovery.
Since this change is critical to how async recovery works in general, I'd like @jdurgin to review this as well.
Actually I do. Without this change the recovery process can cause up to an 80% decrease in client IOPS. BTW, #22330 and #22664 dramatically reduce the chance that a pg can go async recovery, and hence the chance that the recovery process will unblock client I/Os. I am also wondering
Sure.

@jdurgin Ping?
jdurgin left a comment
Including missing objects that we know about at this point in peering seems like a good idea. It's a bit more accurate at least, even if some objects may be more expensive than others to recover.
With respect to increasing availability by choosing more async recovery targets, I wonder if this could be achieved better by e.g. the balancer mgr module manipulating the up-set.
Trying to make the OSDs converge on mappings that aren't the up set will be tough, since that's what recovery is trying to achieve eventually. It's pretty easy to introduce bugs that way.
Peers with async-recovery enabled usually have an up-to-date last-update iterator and hence might be moved out of the async_recovery_targets set during the next peering cycles. 7de3562 makes num_objects_missing trace historical missing objects correctly, hence we could take num_objects_missing into account when determining async_recovery_targets.

Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
guoracle reported that:

> In the asynchronous recovery feature, the asynchronous recovery target OSD is selected by last_update.version, so that after peering is completed, the asynchronous recovery target OSDs update the last_update.version, and then go down again; when the asynchronous recovery target OSDs are back online, when peering, there is no pglog difference between the asynchronous recovery targets and the authoritative OSD, resulting in no asynchronous recovery.

ceph#24004 aimed to solve the problem by persisting the number of missing objects to disk when peering was done, so that we could take both the new approximate missing objects (estimated according to last_update) and the historical num_objects_missing into account when determining async_recovery_targets on any follow-up peering cycles.

However, the above holds only if we can keep an up-to-date num_objects_missing field for each pg instance under any circumstances, which is unfortunately not true for replicas that have completed peering but never started recovery afterwards (7de3562 makes sure we update num_objects_missing for the primary when peering is done, and keeps num_objects_missing up to date as each missing object is recovered).

Note that guoracle also suggested fixing the same problem by using last_complete.version to calculate the pglog difference and updating the last_complete of the asynchronous recovery target OSD in the copy of peer_info to the latest after recovery completes, which would not work well because we might reset last_complete to 0'0 whenever we trim the pglog past the minimal need-version of the missing set.

Fix by persisting num_objects_missing for replicas correctly when peering is done.

Fixes: https://tracker.ceph.com/issues/41924
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>