mgr/volumes: Integrate cephadm with volume nfs interface #34672
batrick merged 28 commits into ceph:master
Conversation
Force-pushed from 1813055 to 022060f
jenkins render docs

Doc render available at http://docs.ceph.com/ceph-prs/34672/

looks great!
Force-pushed from 022060f to d838e23
jenkins render docs

Please don't merge, it requires tests in teuthology.
Force-pushed from a987983 to 9c8eace
src/pybind/mgr/volumes/fs/nfs.py (outdated)

    def create_empty_rados_obj(self):
-       common_conf = 'conf-nfs'
+       common_conf = 'conf-nfs.{}'.format(self.cluster_id)
Do we need this function?
An empty common config is created by the orchestrator:
ceph/src/pybind/mgr/cephadm/nfs.py
Line 78 in 63d690f
Based on the discussion here, volume module creates common config.
-   def _update_common_conf(self, ex_id):
-       common_conf = 'conf-nfs'
+   def _update_common_conf(self, cluster_id, ex_id):
+       common_conf = 'conf-nfs.ganesha-{}'.format(cluster_id)
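The naming convention from the diff above can be sketched as a tiny helper (the helper name itself is hypothetical, only the 'conf-nfs.ganesha-<cluster_id>' format comes from the diff):

```python
def common_conf_obj(cluster_id):
    """Per-cluster common NFS-Ganesha config object name.

    Hypothetical helper illustrating the 'conf-nfs.ganesha-<cluster_id>'
    convention shown in the diff above.
    """
    return 'conf-nfs.ganesha-{}'.format(cluster_id)

print(common_conf_obj('a'))  # conf-nfs.ganesha-a
```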
Can we use the ServiceDescription returned by describe_service?
ceph/src/pybind/mgr/orchestrator/_interface.py
Line 1382 in 63d690f
See the rados_config_location attribute returned by orch ls:
$ ceph orch ls nfs --format yaml
namespace: nfs-ns
placement:
  hosts:
  - hostname: host1
    name: ''
    network: ''
pool: cephfs.a.data
service_id: foo
service_name: nfs.foo
service_type: nfs
status:
  container_image_id: c10ee0889ebeef6d8b82acdb4064be9208e52d938dfae7603ca228f439c948b3
  container_image_name: docker.io/ceph/daemon-base:latest-master-devel
  created: '2020-04-27T14:42:40.890439'
  last_refresh: '2020-04-27T20:43:52.658584'
  rados_config_location: rados://cephfs.a.data/nfs-ns/conf-nfs.foo
  running: 1
  size: 1
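A `rados_config_location` URL like the one reported above can be split into its pool/namespace/object parts with a small parser (a sketch; the function name is hypothetical and it assumes the three-component `rados://<pool>/<namespace>/<object>` form shown here):

```python
def parse_rados_url(url):
    """Split 'rados://<pool>/<namespace>/<object>' into its parts.

    Hypothetical helper; mirrors the rados_config_location value
    reported by `ceph orch ls nfs --format yaml` above.
    """
    prefix = 'rados://'
    assert url.startswith(prefix)
    pool, namespace, obj = url[len(prefix):].split('/', 2)
    return pool, namespace, obj

print(parse_rados_url('rados://cephfs.a.data/nfs-ns/conf-nfs.foo'))
# ('cephfs.a.data', 'nfs-ns', 'conf-nfs.foo')
```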
Again, in the same doc, it says the Orchestrator ServiceDescription rados_config_location should be removed.
Relates to #34592. According to the discussion I had with @epuertat, we should configure the dashboard using mon commands, e.g. https://docs.ceph.com/ceph-prs/33886/api/mon_command_api/#dashboard-set-ganesha-clusters-rados-pool-namespace etc. cc @ceph/dashboard
AFAIR we wanted the dashboard to call the volume module's nfs interface to set up ganesha clusters. This interface creates a common pool and sets a unique namespace for each instance. So why configure the dashboard pool and namespace using mon commands?
Using mon commands will restrict us to a single pool/namespace:
https://docs.ceph.com/docs/master/mgr/dashboard/#configuring-nfs-ganesha-in-the-dashboard
Whereas this PR is attempting to configure a namespace per NFS cluster?
Yes, that's correct. Each cluster has its own set of configs. NFS clusters do not share anything except the pool.
Hrm, the documentation is unclear, but it does appear that multiple namespaces can be specified via the dashboard mon command (using a comma as the delimiter).
From the doc, I see it can be set like this: $ ceph dashboard set-ganesha-clusters-rados-pool-namespace <cluster_id>:<pool_name>[/<namespace>](,<cluster_id>:<pool_name>[/<namespace>])*
It will need to be set every time a new cluster is deployed, along with the old cluster values.
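The comma-delimited value for that dashboard command could be rebuilt from the full cluster map each time, as a sketch (the helper name and the `clusters` dict shape are assumptions, only the `<cluster_id>:<pool_name>[/<namespace>]` entry format comes from the doc quoted above):

```python
def ganesha_clusters_setting(clusters):
    """Build the '<cluster_id>:<pool>[/<namespace>]' entries, comma-joined.

    `clusters` maps cluster_id -> (pool, namespace-or-None). As noted
    above, the whole value must be re-set whenever a cluster is deployed,
    so every existing cluster has to be included each time.
    """
    parts = []
    for cluster_id, (pool, namespace) in sorted(clusters.items()):
        entry = '{}:{}'.format(cluster_id, pool)
        if namespace:
            entry += '/{}'.format(namespace)
        parts.append(entry)
    return ','.join(parts)

print(ganesha_clusters_setting({'foo': ('nfs-ganesha', 'foo'),
                                'bar': ('nfs-ganesha', 'bar')}))
# bar:nfs-ganesha/bar,foo:nfs-ganesha/foo
```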
Force-pushed from 9c8eace to 91ef8e9
Force-pushed from feb620d to 02d6298
Signed-off-by: Varsha Rao <varao@redhat.com>
CACHEINODE will be deprecated soon; use MDCACHE instead. Signed-off-by: Varsha Rao <varao@redhat.com>
check_mon_command() checks the return code of the mon command. Signed-off-by: Varsha Rao <varao@redhat.com>
Force-pushed from 176fbc3 to 719d0c3
Failed again, auth delete used to pass earlier.
The `mgr` profile allows 'auth rm'; use it instead of 'auth del', which is not allowed. Signed-off-by: Varsha Rao <varao@redhat.com>
Force-pushed from 719d0c3 to b2adff1
@varshar16 could you take a look at https://tracker.ceph.com/issues/46046?

@tchaikov Yes, looking into it.
Major changes are:
- Add placement option to the cluster create interface: $ ceph nfs cluster create <type=cephfs> <clusterid> [<placement>]
- Add cluster delete and update interfaces
- watch_url in vstart