ceph-volume: add ceph.osdspec_affinity tag #34436

Merged
jan--f merged 2 commits into ceph:master from jschmid1:dg_affinity
May 4, 2020

Conversation


@jschmid1 jschmid1 commented Apr 7, 2020

Signed-off-by: Joshua Schmid jschmid@suse.de

Part 1) of https://tracker.ceph.com/issues/44755
Resolves: https://tracker.ceph.com/issues/44929 (only the DG_AFFINITY part)
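For context, ceph-volume records per-OSD metadata as LVM tags of the form ceph.<key>=<value> on each logical volume; this PR adds an osdspec_affinity key so orchestrators can map an OSD back to the OSDSpec (drive group) that created it. The sketch below is illustrative only; the function and default names are assumptions, not the actual ceph-volume implementation:

```python
# Hypothetical sketch of ceph-volume-style LVM tag construction.
# Tag keys shown here (osd_id, osd_fsid) mirror existing ceph-volume
# tags; osdspec_affinity is the key added by this PR. The helper name
# and signature are illustrative assumptions.

def build_osd_tags(osd_id, osd_fsid, osdspec_affinity=""):
    """Return the "ceph.<key>=<value>" tag strings for an OSD's LV."""
    tags = {
        "ceph.osd_id": str(osd_id),
        "ceph.osd_fsid": osd_fsid,
        # Empty when the OSD was not created from an OSDSpec.
        "ceph.osdspec_affinity": osdspec_affinity,
    }
    return ["%s=%s" % (k, v) for k, v in sorted(tags.items())]

print(build_osd_tags(0, "abc-123", osdspec_affinity="default_drive_group"))
```

On a real system these strings would be applied with something like lvchange --addtag; here they are only assembled in memory.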


@jschmid1 jschmid1 requested a review from jan--f April 7, 2020 09:30
@jschmid1 jschmid1 requested a review from a team as a code owner April 7, 2020 09:30

jschmid1 commented Apr 8, 2020

We try to source this information in the orchestrators to map OSDs to their respective OSDSpec. It feels like an awfully long way to query ceph-volume on each host just to retrieve this information. It would probably be better off in the OSD's metadata (ceph osd metadata).

What do you think @jan--f @jdurgin ?


jdurgin commented Apr 13, 2020

It seems reasonable to report this from the osd metadata - it could go in its own metadata file like fsid/device_class/etc.
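The suggestion above could be sketched as a small metadata file in the OSD data directory, alongside the existing fsid/type files, which ceph-osd would then surface via ceph osd metadata. This is a minimal illustration of the idea, not the actual patch; the file name and helper functions are assumptions:

```python
import os

# Hypothetical sketch: store the OSDSpec affinity as its own metadata
# file in the OSD data directory, like fsid/device_class/etc. File
# and function names are illustrative assumptions.

def write_osdspec_affinity(osd_data_dir, affinity):
    """Persist the affinity value to <osd_data_dir>/osdspec_affinity."""
    path = os.path.join(osd_data_dir, "osdspec_affinity")
    with open(path, "w") as f:
        f.write(affinity + "\n")

def read_osdspec_affinity(osd_data_dir):
    """Return the stored affinity, or "" for OSDs predating the patch."""
    path = os.path.join(osd_data_dir, "osdspec_affinity")
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        return ""
```

The empty-string fallback matters for the backport discussion below: an older OSD without the patch would simply lack the file rather than error out.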

@jschmid1 jschmid1 force-pushed the dg_affinity branch 2 times, most recently from 45ce708 to 6d6341b Compare April 29, 2020 15:28
@jschmid1 jschmid1 changed the title ceph-volume: add ceph.drivegroup_affinity tag ceph-volume: add ceph.osdspec_affinity tag Apr 29, 2020
Joshua Schmid added 2 commits April 30, 2020 12:07
Signed-off-by: Joshua Schmid <jschmid@suse.de>
Signed-off-by: Joshua Schmid <jschmid@suse.de>
@jschmid1
Contributor Author

> It seems reasonable to report this from the osd metadata - it could go in its own metadata file like fsid/device_class/etc.

see #34835

@jan--f jan--f left a comment

This looks good. One thing I'm mildly worried about is that this should get backported (as everything in ceph-volume), but I doubt the corresponding OSD patch will. So are we just going to rely on the user doing the right thing? Guarding this with a version check or so seems impractical.

@sebastian-philipp
Contributor

We need this backported to Octopus only. I hope this will work out.


jschmid1 commented May 4, 2020

As long as we backport the related PRs at the same time, and since we ship ceph-osd, which includes ceph-volume, in one container, we should end up with compatible versions.

If we need a version guard, we should implement it in cephadm, I think.


jan--f commented May 4, 2020

> As long as we backport the related PRs at the same time, and since we ship ceph-osd, which includes ceph-volume, in one container, we should end up with compatible versions.
>
> If we need a version guard, we should implement it in cephadm, I think.

This is not just container-related... most clusters out there are not containerized. And I doubt backporting the OSD changes to Mimic is feasible, is it?


jschmid1 commented May 4, 2020

> As long as we backport the related PRs at the same time, and since we ship ceph-osd, which includes ceph-volume, in one container, we should end up with compatible versions.
>
> If we need a version guard, we should implement it in cephadm, I think.

> This is not just container-related... most clusters out there are not containerized. And I doubt backporting the OSD changes to Mimic is feasible, is it?

Right, but you explicitly have to set the environment variable, which only cephadm implements currently.
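The opt-in behavior described here could look like the following: the orchestrator exports an environment variable before invoking ceph-volume, and ceph-volume records its value as the tag, defaulting to empty when unset. This is a hedged sketch; the variable name CEPH_VOLUME_OSDSPEC_AFFINITY is an assumption for illustration and may differ from the merged code:

```python
import os

# Hypothetical sketch of the env-var handoff: only an orchestrator
# that knowingly sets the variable (currently cephadm, per the
# discussion) gets a non-empty affinity tag. The variable name is
# an illustrative assumption.

def get_osdspec_affinity():
    """Return the affinity set by the orchestrator, or "" if unset."""
    return os.environ.get("CEPH_VOLUME_OSDSPEC_AFFINITY", "")
```

Because the default is the empty string, deployments that never set the variable (e.g. non-cephadm clusters on older releases) are unaffected, which is the compatibility argument being made above.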


jan--f commented May 4, 2020

I opened https://tracker.ceph.com/issues/45374 for the blacklist part
