mgr/cephadm: Add "default_listeners" to nvmeof spec #64210
Conversation
This pull request can no longer be automatically merged: a rebase is needed and changes have to be manually resolved.
If the cephadm spec contains `default_listeners: 1.1.1.*`, look for an IP address on each host that matches the `1.1.1.*` subnet, and add e.g. `default_listeners: 1.1.1.2:4420` to the ceph-nvmeof conf. This is used to auto-create listeners when creating NVMe-oF subsystems.

Fixes: https://tracker.ceph.com/issues/71860
Signed-off-by: Vallari Agrawal <vallari.agrawal@ibm.com>
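The subnet-matching step described above can be sketched roughly as follows. This is an illustrative sketch, not cephadm's actual code: the function name `match_listener_ip` and the assumption that a trailing `.*` means a /24 network are mine.

```python
from ipaddress import ip_address, ip_network
from typing import List, Optional

def match_listener_ip(pattern: str, host_ips: List[str]) -> Optional[str]:
    """Return the first host IP that falls inside the subnet implied by a
    wildcard pattern such as '1.1.1.*' (interpreted here as 1.1.1.0/24).
    Hypothetical helper; not cephadm's API."""
    if pattern.endswith('.*'):
        # Translate the trailing '*' into a /24 network (an assumption).
        subnet = ip_network(pattern[:-2] + '.0/24')
    else:
        subnet = ip_network(pattern, strict=False)
    for ip in host_ips:
        if ip_address(ip) in subnet:
            return ip
    return None

# A host with addresses on two networks: only 1.1.1.2 matches '1.1.1.*'.
print(match_listener_ip('1.1.1.*', ['10.0.0.5', '1.1.1.2']))  # 1.1.1.2
```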
```python
if default_listeners:
    listeners_ip = ""
    for listeners_format in default_listeners.split(','):
        hosts = [h.hostname for h in spec.placement.hosts]
```
are we okay with limiting this feature to placements that use direct host lists? For example, if I was using labels this wouldn't work. Actually, I think you might end up getting a `'NoneType' object is not iterable` error if someone tried, which we should handle.
We also need to support it with labels or with any other form of deployment.
okay, in that case I'd recommend trying to look at things like how we get the set of peers for keepalived configs. That has the same issue where it needs to know where all the daemons are in order to write the config properly.
@adk3798 when I test placement with labels, it seems to work. I deployed the service with this command: `ceph orch apply nvmeof mypool mygroup1 --placement 'cephnvme-vm14=nvmeof.a;cephnvme-vm13=nvmeof.b;cephnvme-vm12=nvmeof.c;cephnvme-vm11=nvmeof.d'`
And the exported spec looks like this (I've masked IPs in this output):

```yaml
service_type: nvmeof
service_id: mypool.mygroup1
service_name: nvmeof.mypool.mygroup1
placement:
  hosts:
  - cephnvme-vm14=nvmeof.a
  - cephnvme-vm13=nvmeof.b
  - cephnvme-vm12=nvmeof.c
  - cephnvme-vm11=nvmeof.d
spec:
  default_listeners: "x.x.x.*"
...
```
And the hostnames and IPs are working well too.
```
[11-Sep-2025 10:59:55] INFO config.py:72 (2): default_listeners = cephnvme-vm14=x.x.x.x;cephnvme-vm13=x.x.x.y;cephnvme-vm12=x.x.x.z;cephnvme-vm11=x.x.x.k;
```
Then I also tested by removing the service and deploying directly with the above spec, and that worked well too. Is there any other way of deployment that I'm missing?
> I'd recommend trying to look at things like how we get the set of peers for keepalived configs.
Please also share details about that, it would be very helpful! Thank you!
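For reference, the `default_listeners` conf value shown in the log line above uses a `host=ip;host=ip;...` format. A minimal sketch of parsing it back into a hostname-to-IP mapping, assuming that format (the function name `parse_default_listeners` is illustrative, not a ceph-nvmeof API):

```python
from typing import Dict

def parse_default_listeners(value: str) -> Dict[str, str]:
    """Parse 'hostA=ipA;hostB=ipB;...' into {hostname: ip}.
    Hypothetical helper based on the logged conf format above."""
    listeners: Dict[str, str] = {}
    for entry in value.split(';'):
        entry = entry.strip()
        if not entry:
            continue  # tolerate the trailing ';' in the generated conf
        host, _, ip = entry.partition('=')
        listeners[host] = ip
    return listeners

conf = "cephnvme-vm14=10.0.0.1;cephnvme-vm13=10.0.0.2;"
print(parse_default_listeners(conf))
# {'cephnvme-vm14': '10.0.0.1', 'cephnvme-vm13': '10.0.0.2'}
```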
That kind of surprises me, but if it works, it works. Would you mind adding a teuthology test to this PR that sets those fields (or updating https://github.com/ceph/ceph/blob/main/qa/suites/orch/cephadm/smoke-roleless/2-services/nvmeof.yaml, which is quite simplistic at the moment), just so we can make sure that's getting tested?
```python
if ip_network(n_subnet).version != 4:
    continue
```
is the lack of IPv6 support a restriction on the nvmeof side?
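The check being discussed filters candidate subnets to IPv4 only, via the `version` attribute of `ipaddress.ip_network` objects. A self-contained illustration of that filter (the `subnets` list is made up for the example):

```python
from ipaddress import ip_network

# Mixed candidate subnets; fd00::/64 is IPv6 and gets skipped,
# mirroring the `version != 4: continue` check in the diff above.
subnets = ['192.168.0.0/24', 'fd00::/64', '10.1.0.0/16']
ipv4_subnets = [s for s in subnets if ip_network(s).version == 4]
print(ipv4_subnets)  # ['192.168.0.0/24', '10.1.0.0/16']
```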
This pull request has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs for another 30 days.
This pull request has been automatically closed because there has been no activity for 90 days. Please feel free to reopen this pull request (or open a new one) if the proposed change is still appropriate. Thank you for your contribution!
If the cephadm spec `default_listeners: x.x.x.*:4420` is present, look for an IP address on each host that matches the `x.x.x.*` subnet, and add `default_listeners: x.x.x.y:4420` to the ceph-nvmeof conf. This is used to auto-create listeners when creating NVMe-oF subsystems.

Related PR: ceph/ceph-nvmeof#1381
Fixes: https://tracker.ceph.com/issues/71860