os, osd: readv #30061

Merged

xiexingguo merged 4 commits into ceph:master from xiexingguo:wip-sparse-read on Sep 7, 2019

Conversation

@xiexingguo
Member

Here is the background for the proposal: #29588

Checklist

  • References tracker ticket
  • Updates documentation if necessary
  • Includes tests for new functionality or reproducer for bug


The typical read flow consists of 4 steps:
- read meta, e.g., extent-map
- read data from cache
- read cache-missing data off the disk
- decompress, verify checksum, and generate the final output stream

Hence we split _do_read into corresponding small helpers, which are
easier to maintain and reuse.

Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
@xiexingguo
Member Author

@liewegas the test result looks fine:

http://pulpito.ceph.com/xxg-2019-08-31_09:22:11-rados:thrash-wip-sparse-read-distro-basic-smithi/

The change set is actually small (the main part is the refactor of the read function) and should be easy to review.

@liewegas
Member

liewegas commented Sep 3, 2019

I like this much better!

At first I thought it was weird that you have to call fiemap first... but I guess the recovery caller we care about already has that info on hand? It also struck me as a bit funny that the result is all concatenated into a single return bufferlist. Given that, I think readv is actually a better name for this?

@xiexingguo
Member Author

but I guess the recovery caller we care about already has that info on hand?

Yeah. Actually build_push_op may call fiemap once and then split the whole fiemap result into several parts for transmission, e.g., due to the max-size limit of a single push op.

I think readv is actually a better name for this?

I agree. Repushed.

Currently this only works for bluestore.
The default version still reads each extent separately and synchronously,
but I think that should not be a concern?

Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
However, this is only meaningful for ReplicatedBackend,
since ECBackend already reads everything asynchronously.
Also, the osd_verify_sparse_read_holes option could reliably be
dropped, as it seems it never actually gets a chance to be
exercised.

Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
@xiexingguo xiexingguo changed the title os, osd: read sparse os, osd: readv Sep 4, 2019
@xiexingguo
Member Author

retest this please

@xiexingguo xiexingguo merged commit 8c99f81 into ceph:master Sep 7, 2019
@xiexingguo xiexingguo deleted the wip-sparse-read branch September 7, 2019 02:34
req.regs.emplace_back(region_t(pos, b_off, l, front));
r2r.emplace_back(std::move(req));
}
}
Contributor

@xiexingguo I'd suggest avoiding massively reformatting source code in the future.

Member Author

Sorry, will pay attention next time.

tchaikov added a commit to tchaikov/ceph that referenced this pull request Sep 9, 2019
After ceph#30061, the extents returned by sparse read will always be
an empty map as long as the extent set to be read is empty or the extent(s)
in it are empty, even if the object store does not support sparse read.

Fixes: https://tracker.ceph.com/issues/41721
Signed-off-by: Kefu Chai <kchai@redhat.com>
@tchaikov
Contributor

tchaikov commented Sep 9, 2019

pdvian pushed a commit to pdvian/ceph that referenced this pull request Sep 12, 2019 (cherry picked from commit 855d84d)

hairesis pushed a commit to hairesis/ceph that referenced this pull request Sep 18, 2019

alfonsomthd pushed a commit to rhcs-dashboard/ceph that referenced this pull request Sep 18, 2019