
mgr/rook: Rook orchestrator OSD creation using ceph orch apply osd#42757

Merged
josephsawaya merged 7 commits into ceph:master from josephsawaya:wip-mgr-rook-osd-creation
Aug 19, 2021

Conversation

@josephsawaya

Implement ceph orch apply osd for the rook manager module.

Implement apply_drivegroups method on the RookOrchestrator class.

Add vendor and model information to Device objects in device method in LSOFetcher.

Create DefaultCreator and LSOCreator classes that handle parsing drive groups, filtering devices by those drive groups, and creating a StorageClassDeviceSet for each of those devices. The class used depends on the storage class the user provides in the Ceph config.
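To illustrate the filtering step described above, here is a minimal, hypothetical sketch of matching devices against a drive group's device selection. `Device`, `DeviceSelection`, and `filter_devices` are simplified stand-ins for the mgr/rook and drive-group classes, not the actual implementation.

```python
# Hypothetical sketch: filter inventory devices by a drive group's
# vendor/model selection (simplified stand-in for mgr/rook logic).
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Device:
    path: str
    vendor: Optional[str] = None
    model: Optional[str] = None


@dataclass
class DeviceSelection:
    vendor: Optional[str] = None
    model: Optional[str] = None


def filter_devices(devices: List[Device], spec: DeviceSelection) -> List[Device]:
    """Keep only devices whose vendor/model match the drive group spec."""
    result = []
    for dev in devices:
        if spec.vendor and (dev.vendor or '') != spec.vendor:
            continue
        if spec.model and (dev.model or '') != spec.model:
            continue
        result.append(dev)
    return result
```

In the real module each surviving device would then get a StorageClassDeviceSet; here the sketch stops at the filtering step.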

Implement drive_group_loop method to update OSDs based on added PVs or devices that match applied drivegroups.

Create get_storage_class method to check if the storage class provided in the config exists.

Fix convert_size method to support all formats of expressing storage size.

Checklist

  • References tracker ticket
  • Updates documentation if necessary
  • Includes tests for new functionality or reproducer for bug


@josephsawaya josephsawaya requested a review from a team as a code owner August 11, 2021 15:38
@josephsawaya josephsawaya force-pushed the wip-mgr-rook-osd-creation branch from 4afa3f8 to 2720959 August 11, 2021 16:32
@josephsawaya josephsawaya changed the title from "Wip mgr rook osd creation" to "mgr/rook: Rook orchestrator OSD creation using ceph orch apply osd" Aug 12, 2021
self.storage_class = storage_class
self.inventory = inventory

def parse_drive_group_size(self, drive_group_size: Optional[str]) -> Tuple[int, int]:
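Given the signature shown above, one plausible sketch of the size parsing is the following. The `"low:high"` range format and the unit table are assumptions for illustration, not the actual mgr/rook behavior.

```python
# Hypothetical sketch of parse_drive_group_size, assuming the spec is a
# "low:high" size-range string such as "10G:40G" (assumed format).
import re
from typing import Optional, Tuple

_UNITS = {'': 1, 'K': 1024, 'M': 1024**2, 'G': 1024**3, 'T': 1024**4}


def parse_drive_group_size(drive_group_size: Optional[str]) -> Tuple[int, int]:
    """Return (low, high) bounds in bytes; 0 means unbounded."""
    if not drive_group_size:
        return 0, 0
    low_s, _, high_s = drive_group_size.partition(':')

    def to_bytes(s: str) -> int:
        if not s:
            return 0
        m = re.match(r'(\d+)([KMGT]?)B?', s.upper())
        if not m:
            raise ValueError(f"unparseable size: {s}")
        return int(m.group(1)) * _UNITS[m.group(2)]

    return to_bytes(low_s), to_bytes(high_s)
```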

Joseph Sawaya added 7 commits August 17, 2021 10:50
This commit adds the PV name, node name, vendor and model to the Device object
created in the LSOFetcher device method, this information is useful for adding
OSDs.

Signed-off-by: Joseph Sawaya <jsawaya@redhat.com>
This commit implements the apply_drivegroups method in the
RookOrchestrator class and creates the DefaultCreator and
LSOCreator classes that handle creating OSDs. The add_osds
method in RookCluster will use each creator based on what
storage class the user provided in the ceph config.

Signed-off-by: Joseph Sawaya <jsawaya@redhat.com>
This commit creates a new threaded method on the RookCluster class
that keeps the cluster updated by re-applying drive groups in a
loop.

Signed-off-by: Joseph Sawaya <jsawaya@redhat.com>
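The re-apply loop described in this commit can be sketched as a small background worker. The class name, interval, and `apply_fn` callback below are illustrative, not the actual mgr/rook API.

```python
# Minimal sketch of a background loop that periodically re-applies stored
# drive groups so newly added devices/PVs get OSDs (illustrative names).
import threading


class DriveGroupRunner:
    def __init__(self, apply_fn, interval: float = 10.0):
        self._apply_fn = apply_fn    # callable that re-applies all drive groups
        self._interval = interval    # seconds between passes
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._loop, daemon=True)

    def start(self) -> None:
        self._thread.start()

    def stop(self) -> None:
        self._stop.set()
        self._thread.join()

    def _loop(self) -> None:
        # Event.wait doubles as an interruptible sleep: it returns True
        # once stop() has been called, ending the loop promptly.
        while not self._stop.wait(self._interval):
            self._apply_fn()
```

Using an `Event` rather than `time.sleep` lets the loop shut down immediately instead of waiting out a full interval.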
This commit fixes the apply_drivegroups method in RookOrchestrator
to process the entire list of drive groups passed.

This commit also fixes some coding style errors in RookCluster.

Signed-off-by: Joseph Sawaya <jsawaya@redhat.com>
This commit creates the get_storage_class method on the RookCluster
class, used to get the storage class matching the name provided
in the Ceph config.

Signed-off-by: Joseph Sawaya <jsawaya@redhat.com>
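The lookup this commit describes amounts to "find the StorageClass with the configured name, or fail". The real method queries Kubernetes; the sketch below replaces that with a plain list of dicts, so the shape of the check is illustrative only.

```python
# Hypothetical sketch of get_storage_class: the real code queries the
# Kubernetes API; here the cluster's storage classes are a plain list.
from typing import Any, Dict, List


def get_storage_class(name: str, existing: List[Dict[str, Any]]) -> Dict[str, Any]:
    """Return the storage class matching `name`, or raise if absent."""
    for sc in existing:
        if sc['metadata']['name'] == name:
            return sc
    raise ValueError(f"storage class '{name}' not found")
```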
This commit fixes the convert_size method by getting it
to use the re python module to split the digits and letters
to support all units a PV could be expressed in.

Signed-off-by: Joseph Sawaya <jsawaya@redhat.com>
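The re-based splitting this commit describes can be sketched as below. The assumption that PV sizes use Kubernetes quantity suffixes (binary `Ki`/`Mi`/`Gi` and decimal `k`/`M`/`G`) is mine; the sketch is not the actual convert_size implementation.

```python
# Hypothetical sketch of convert_size: use re to split the numeric part
# from the unit suffix, assuming Kubernetes-style quantities ('10Gi',
# '500M', '128').
import re

_BINARY = {'Ki': 1024, 'Mi': 1024**2, 'Gi': 1024**3, 'Ti': 1024**4, 'Pi': 1024**5}
_DECIMAL = {'k': 1000, 'K': 1000, 'M': 1000**2, 'G': 1000**3, 'T': 1000**4, 'P': 1000**5}


def convert_size(size_str: str) -> int:
    """Convert a quantity like '10Gi' or '500M' to bytes."""
    m = re.match(r'([0-9.]+)\s*([A-Za-z]*)', size_str)
    if not m:
        raise ValueError(f"unparseable size: {size_str}")
    num, unit = float(m.group(1)), m.group(2)
    if unit in _BINARY:
        return int(num * _BINARY[unit])
    if unit in _DECIMAL:
        return int(num * _DECIMAL[unit])
    return int(num)  # bare number: already bytes
```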
This commit uses the SizeMatcher Class in the Creator functions
to parse and filter devices according to the size specified in a
drive_group.

Signed-off-by: Joseph Sawaya <jsawaya@redhat.com>
@josephsawaya josephsawaya force-pushed the wip-mgr-rook-osd-creation branch from 2720959 to c6ae95d August 17, 2021 20:08

@jmolmo jmolmo left a comment


Hey .. this looks good!! :-)

I have found several things that need to be implemented, but I think it is better to start with this base and add improvements/increased functionality in new PRs.
With this change we are able to create OSDs, so it becomes possible to start implementing integration tests and, more important, it opens the door to testing other functionality. :-)
So let's merge this!!!

What I find missing, and what I think must be addressed in next steps:

  1. The OSD service created is not shown in the list of services ("ceph orch ls").

  2. Drive group flexibility and possibilities are heavily reduced.
    In my view, we will need at least the possibility to distribute OSD components between different PVs. (Take a look at [this drive group example](https://docs.ceph.com/en/latest/cephadm/osd/#dedicated-wal-db).)
    In a BlueStore OSD we have the possibility to put data, WAL and DB on different devices (using fast devices for WAL and DB, we can improve performance).
    Besides that, Rook is able to configure OSDs that way...
    So I think that having this possibility is really needed.

But this opens the door to an interesting design discussion: how can we differentiate PVs that are going to be used for "wal" or "db"? Are we going to use labels? Should we use a special StorageClass for these devices? Let's discuss this point in the orchestrators meeting :-)

  3. There are a lot of specific use cases where we can have unexpected results...
    Example: you can create several PVCs for the same PV, but only one OSD can be placed on the device used by the PV/PVC.
    Maybe we must define clearly what kinds of configurations we support.

NOTE: Before merging, please follow @sebastian-philipp's suggestion to avoid duplicated code.

Awesome Work Joseph!!!

@josephsawaya josephsawaya merged commit 1bfde3f into ceph:master Aug 19, 2021