
mgr/orchestrator: Fix ceph orch ls in Rook #39612

Merged

sebastian-philipp merged 1 commit into ceph:master from jmolmo:fix_rook_orch_ls on Mar 2, 2021


Conversation

@jmolmo (Member) commented Feb 22, 2021

Fixes: https://tracker.ceph.com/issues/49411

Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@redhat.com>

Checklist

  • [x] References tracker ticket
  • Updates documentation if necessary
  • Includes tests for new functionality or reproducer for bug


@jmolmo jmolmo requested a review from a team February 22, 2021 13:11
@sebastian-philipp (Contributor) commented:

@varshar16 I know that you looked into adding a Teuthology test a while ago. Did you make any progress, or did it stall?

'prometheus': 'prometheus',
'node-exporter': 'node-exporter',
'crash': 'crash',
'crashcollector': 'crashcollector', # Specific Rook Daemon
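For context, the hunk above extends a lookup table from Rook app labels to orchestrator daemon types. A minimal sketch of how such a table might be consulted (the `ROOK_APP_TO_DAEMON_TYPE` name and `daemon_type_for_app` helper are hypothetical, not the actual mgr/rook code):

```python
# Hypothetical sketch, mirroring the hunk above: map a Rook app label
# to the daemon type that `ceph orch ls` should report.
ROOK_APP_TO_DAEMON_TYPE = {
    'prometheus': 'prometheus',
    'node-exporter': 'node-exporter',
    'crash': 'crash',
    'crashcollector': 'crashcollector',  # Specific Rook Daemon
}

def daemon_type_for_app(app: str) -> str:
    """Return the orchestrator daemon type for a Rook app label.

    Labels without an explicit entry map one-to-one to themselves.
    """
    return ROOK_APP_TO_DAEMON_TYPE.get(app, app)
```

With the new `crashcollector` entry, Rook's crash-collector pods are reported under their own daemon type rather than being dropped from the listing.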
Contributor:

Hm. maybe we should change mgr/rook to return crash as a daemon type for crashcollector? wdyt?

Contributor:

Why?

Contributor:

Rook's crashcollector app takes the role of the crash service type of the orchestrator interface. As both service types fill in the same role, my thinking was to keep them aligned at the orchestrator level.

Member Author:

This means "masking" reality, with two bad consequences:

  1. We lose (or make more difficult) the possibility of customizing the "crashcollector" pod and the "crash" daemon independently.
  2. The k8s user is going to wonder what happened to the crashcollector pod, so we would need to explain this in the documentation.

I cannot see a big improvement or advantage in masking the "crashcollector" pod.


Member Author:

Yes, you are right, because we are doing a conversion from crashCollector to crash...

if not cl['spec'].get('crashCollector', {}).get('disable', False):
    spec['crash'] = orchestrator.ServiceDescription(
        spec=ServiceSpec(
            'crash',
            placement=PlacementSpec.from_string('*'),
        ),
        size=num_nodes,
        container_image_name=image_name,
        last_refresh=now,
    )

This means that we have the normal output from the orchestrator:

[root@rook-ceph-tools-78cdfd976c-dfs5c /]# ceph orch ls
NAME   RUNNING  REFRESHED  AGE  PLACEMENT  IMAGE NAME          IMAGE ID      
crash      3/3  0s ago     71m  *          jolmomar/ceph:rook  9a420a7fb11e  
mgr        1/1  0s ago     65m  count:1    jolmomar/ceph:rook  9a420a7fb11e  
mon        3/3  0s ago     71m  count:3    jolmomar/ceph:rook  9a420a7fb11e 

But the user has to guess that the crashcollector pods they can see in k8s are the crash daemons listed in the ceph orch output:

[jolmomar@juanmipc ceph]$ kubectl -n rook-ceph get pods
NAME                                                        READY   STATUS      RESTARTS   AGE
...
rook-ceph-crashcollector-ku-master-00.jm-5fc96cdf46-wq7lq   1/1     Running     0          84m
rook-ceph-crashcollector-ku-worker-00.jm-87449fc54-jf44p    1/1     Running     0          83m
rook-ceph-crashcollector-ku-worker-01.jm-dd855f587-t6rmr    1/1     Running     0          89m
rook-ceph-mgr-a-5c8554f57b-lrwk9                            1/1     Running     0          83m
rook-ceph-mon-a-579657c4f-7dchd                             1/1     Running     0          89m
rook-ceph-mon-b-65946c794d-9xh8j                            1/1     Running     0          84m
rook-ceph-mon-c-6446cf77dd-wdvx2                            1/1     Running     0          84m
rook-ceph-operator-559b6fcf59-x54tl                         1/1     Running     0          91m
rook-ceph-osd-0-c7b6956df-fgnv2                             1/1     Running     0          83m
rook-ceph-osd-1-6755647479-zlbwz                            1/1     Running     0          83m
rook-ceph-osd-2-b48759bb9-2zqls                             1/1     Running     0          83m
...
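The naming mismatch is easy to see by extracting the daemon type from the pod names above. A rough sketch (the regex and the `rook_daemon_type` helper are illustrative, not code from this PR):

```python
import re

def rook_daemon_type(pod_name: str) -> str:
    """Recover the daemon type from a Rook pod name such as
    'rook-ceph-crashcollector-ku-master-00.jm-5fc96cdf46-wq7lq'.

    Illustrative helper only; the real pod-name layout is
    'rook-ceph-<type>-<suffix>' for the pods listed above.
    """
    m = re.match(r'rook-ceph-([a-z]+)', pod_name)
    return m.group(1) if m else 'unknown'
```

For the pod listing above this yields crashcollector, mgr, mon, operator and osd; no pod named plain "crash" ever appears, which is why reporting the type as crash would force users to learn the mapping themselves.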

Member Author:

fixed

@varshar16 (Contributor) commented Feb 22, 2021

@varshar16 I know that you looked into adding a Teuthology test a while ago. Did you make any progress, or did it stall?

It stalled while I was fixing mgr/rook bugs. Since most of them are fixed now, I will start looking into testing again.

@jmolmo (Member Author) commented Feb 22, 2021

jenkins test make check

@sebastian-philipp (Contributor) commented:
orchestrator/_interface.py:1210:44: E261 at least two spaces before inline comment
1     E261 at least two spaces before inline comment
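For reference, flake8's E261 fires when an inline comment is not preceded by at least two spaces; plausibly the "# Specific Rook Daemon" comment added in this PR. A minimal illustration (not the actual line from _interface.py):

```python
# flake8 E261: an inline comment needs at least two spaces before the '#'.
bad = 1 # only one space before this comment, so flake8 reports E261
good = 2  # two spaces before the inline comment, so no warning
```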
