mgr/rook: Rook orchestrator OSD creation using ceph orch apply osd #42757

josephsawaya merged 7 commits into ceph:master

Conversation
Force-pushed from 4afa3f8 to 2720959
ceph orch apply osd
src/pybind/mgr/rook/rook_cluster.py (outdated)

    self.storage_class = storage_class
    self.inventory = inventory

    def parse_drive_group_size(self, drive_group_size: Optional[str]) -> Tuple[int, int]:
Note we have that code already in ceph: https://github.com/ceph/ceph/blob/master/src/python-common/ceph/deployment/drive_group.py
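The helper flagged here parses a size spec into a `(low, high)` byte range, which is what ceph's python-common drive group code already provides. As a rough, self-contained sketch of what such a parser does (illustrative only, not the ceph implementation), a `low:high` spec can be handled like this:

```python
import re

# Illustrative only -- ceph's python-common drive_group.py already covers this.
_UNITS = {'B': 1, 'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3, 'T': 1024 ** 4}

def _to_bytes(size: str) -> int:
    """Convert a size string such as '10G' into bytes."""
    m = re.fullmatch(r'(\d+(?:\.\d+)?)\s*([BKMGT])B?', size.strip().upper())
    if not m:
        raise ValueError(f'unparseable size: {size!r}')
    return int(float(m.group(1)) * _UNITS[m.group(2)])

def parse_size_range(spec: str) -> tuple:
    """Parse 'low:high', 'low:', ':high' or an exact size into (low, high) bytes."""
    if ':' in spec:
        low, high = spec.split(':', 1)
        return (_to_bytes(low) if low else 0,
                _to_bytes(high) if high else 2 ** 63 - 1)
    exact = _to_bytes(spec)
    return exact, exact
```

Reusing the existing ceph helpers instead of a local copy of this logic is exactly the point of the comment above.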
This commit adds the PV name, node name, vendor and model to the Device object created in the LSOFetcher device method; this information is useful for adding OSDs. Signed-off-by: Joseph Sawaya <jsawaya@redhat.com>
This commit implements the apply_drivegroups method in the RookOrchestrator class and creates the DefaultCreator and LSOCreator classes that handle creating OSDs. The add_osds method in RookCluster will use each creator based on what storage class the user provided in the ceph config. Signed-off-by: Joseph Sawaya <jsawaya@redhat.com>
This commit creates a new threaded method on the RookCluster class that keeps the cluster updated by re-applying drive groups in a loop. Signed-off-by: Joseph Sawaya <jsawaya@redhat.com>
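The threaded re-apply loop described in this commit can be sketched as follows. This is a hypothetical stand-in (the class and method names here are NOT the actual RookCluster API): a daemon thread periodically re-applies the known drive groups so that PVs appearing later still get OSDs created.

```python
import threading
from typing import Callable, List

class DriveGroupLoop:
    """Illustrative sketch: periodically re-apply drive groups in a background
    thread so devices added after `ceph orch apply osd` are still picked up."""

    def __init__(self, apply_fn: Callable[[object], None], interval: float = 10.0) -> None:
        self._apply_fn = apply_fn      # stands in for the real "apply drive group" call
        self._interval = interval
        self._drive_groups: List[object] = []
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def add(self, drive_group: object) -> None:
        with self._lock:
            self._drive_groups.append(drive_group)

    def start(self) -> None:
        self._thread.start()

    def stop(self) -> None:
        self._stop.set()
        self._thread.join()

    def _run(self) -> None:
        # Event.wait doubles as an interruptible sleep between passes.
        while not self._stop.wait(self._interval):
            with self._lock:
                groups = list(self._drive_groups)
            for dg in groups:
                self._apply_fn(dg)  # re-applying is assumed to be idempotent
```

The key design point is that re-applying a drive group must be idempotent: each pass only creates OSDs for newly matching devices.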
This commit fixes the apply_drivegroups method in RookOrchestrator to process the entire list of drive groups passed. This commit also fixes some coding style errors in RookCluster. Signed-off-by: Joseph Sawaya <jsawaya@redhat.com>
This commit creates the get_storage_class method on the RookCluster class, used to get the storage class matching the name provided in the ceph config. Signed-off-by: Joseph Sawaya <jsawaya@redhat.com>
This commit fixes the convert_size method by using the Python re module to split the digits from the letters, supporting all units a PV capacity could be expressed in. Signed-off-by: Joseph Sawaya <jsawaya@redhat.com>
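The fix described here can be sketched along these lines (an illustrative stand-in, not the merged code): PV capacities arrive as Kubernetes quantity strings, in either binary ("Gi") or decimal ("G") units, so `re` splits the numeric part from the unit suffix and maps the suffix to a byte multiplier:

```python
import re

# Illustrative sketch: Kubernetes quantity units a PV capacity may use.
_MULTIPLIERS = {
    '': 1,
    'K': 10 ** 3, 'M': 10 ** 6, 'G': 10 ** 9, 'T': 10 ** 12, 'P': 10 ** 15,
    'Ki': 2 ** 10, 'Mi': 2 ** 20, 'Gi': 2 ** 30, 'Ti': 2 ** 40, 'Pi': 2 ** 50,
}

def convert_size(quantity: str) -> int:
    """Split the numeric part from the unit suffix and return bytes."""
    m = re.fullmatch(r'(\d+(?:\.\d+)?)([KMGTP]i?)?', quantity.strip())
    if not m:
        raise ValueError(f'unparseable quantity: {quantity!r}')
    number, unit = m.group(1), m.group(2) or ''
    return int(float(number) * _MULTIPLIERS[unit])
```

Note the distinction between "10G" (10^9 bytes) and "10Gi" (2^30 * 10 bytes), which is why splitting on the full suffix rather than the first letter matters.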
This commit uses the SizeMatcher class in the Creator classes to parse and filter devices according to the size specified in a drive_group. Signed-off-by: Joseph Sawaya <jsawaya@redhat.com>
Force-pushed from 2720959 to c6ae95d
jmolmo left a comment:
Hey .. this looks good!! :-)
I have found several things that still need to be implemented, but I think it is better to start with this base and add improvements and increased functionality in new PRs.
With this change we are able to create OSDs, which makes it possible to start implementing integration tests and, more importantly, opens the door to testing other functionality. :-)
So let's merge this!!!
What I miss, and what I think must be addressed in next steps, is:

- The OSD service created is not shown in the list of services (`ceph orch ls`).
- Drive group flexibility and possibilities are heavily reduced. In my view, we will need at least the possibility to distribute OSD components between different PVs (take a look at [this drive group example](https://docs.ceph.com/en/latest/cephadm/osd/#dedicated-wal-db)). In a Bluestore OSD we can put the data, WAL and DB on different devices (using fast devices for the WAL and DB can improve performance). Besides that, Rook is able to configure OSDs in that way, so I think having this possibility is really needed. But this opens the door to an interesting design discussion: how can we differentiate PVs that are going to be used for the WAL or the DB? Are we going to use labels? Should we use a special StorageClass for these devices? Let's discuss this point in the orchestrators meeting :-)
- There are a lot of specific use cases where we can have unexpected results. Example: you can create several PVCs for the same PV, but only one OSD can be placed on the device used by the PV/PVC. Maybe we must define clearly what kinds of configurations we support.
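For reference, the dedicated WAL/DB layout mentioned above looks roughly like the following cephadm drive group spec (adapted from the linked docs page; the service id and device models here are placeholders):

```yaml
service_type: osd
service_id: osd_spec_dedicated_wal_db   # placeholder name
placement:
  host_pattern: '*'
spec:
  data_devices:
    model: HDD-MODEL-XYZ     # slow devices hold the OSD data
  db_devices:
    model: SSD-MODEL-ABC     # faster devices for the Bluestore DB
  wal_devices:
    model: NVME-MODEL-123    # fastest devices for the Bluestore WAL
```

Supporting something equivalent on the Rook path would mean mapping each of these device categories onto a set of PVs, which is exactly the labels-versus-StorageClass question raised here.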
NOTE: Before merging, please follow @sebastian-philipp's suggestion (above) to avoid duplicating code.
Awesome work, Joseph!!!
Implement `ceph orch apply osd` for the rook manager module.

- Implement `apply_drivegroups` method on the RookOrchestrator class.
- Add vendor and model information to `Device` objects in the `device` method in `LSOFetcher`.
- Create `DefaultCreator` and `LSOCreator` classes that handle parsing drive groups, filtering devices by those drive groups and creating a StorageClassDeviceSet for each of those devices. The class used is based on the storage class provided by the user in the ceph config.
- Implement `drive_group_loop` method to update OSDs based on added PVs or devices that match applied drive groups.
- Create `get_storage_class` method to check if the storage class provided in the config exists.
- Fix `convert_size` method to support all formats of expressing storage size.

Checklist
Show available Jenkins commands
- jenkins retest this please
- jenkins test classic perf
- jenkins test crimson perf
- jenkins test signed
- jenkins test make check
- jenkins test make check arm64
- jenkins test submodules
- jenkins test dashboard
- jenkins test api
- jenkins test docs
- jenkins render docs
- jenkins test ceph-volume all
- jenkins test ceph-volume tox