
mgr/cephadm: add RGW support for NFS ganesha #37600

Merged: mgfritch merged 4 commits into ceph:master from mgfritch:cephadm-nfs-rgw on Nov 7, 2020

Conversation

@mgfritch (Contributor) commented Oct 8, 2020

  • create an RGW keyring for NFS daemon access
  • generate RGW FSAL in ganesha.conf

Fixes: https://tracker.ceph.com/issues/43686
Signed-off-by: Michael Fritch mfritch@suse.com

Checklist

  • References tracker ticket
  • Updates documentation if necessary
  • Includes tests for new functionality or reproducer for bug


Comment on lines +34 to +38
RGW {
cluster = "ceph";
name = "client.{{ rgw_user }}";
}

Contributor:

Instead of adding the RGW block by default, it would be good to have an export-type option in ceph orch apply nfs, or to let higher-level interfaces modify the ganesha conf according to export type. We already have a cluster type option in the volumes/nfs interface, ceph nfs cluster create <type> <clusterid> [<placement>], for the purpose of modifying the minimal ganesha conf according to export type. @batrick @jtlayton

Contributor Author:

I generally agree that our goal is to minimize ganesha.conf into a very simple bootstrap config, but we do need to manage the keyrings (both add and remove) and also mount the keyring into each daemon container.

Member:

I think the problem is that once defined here you can't add other options to the RGW section from another RADOS config. cephadm can setup the keys but I think it'd be better for the higher level config to write the RGW section.

Contributor Author:

I've been able to overwrite this block from the higher level config, but the entire block is replaced, which is problematic because this would require the higher level workflow to also have knowledge of the keyring entity.

@batrick added the rgw label Oct 9, 2020
mounts[os.path.join(data_dir, 'etc/ganesha')] = '/etc/ganesha:z'
if self.rgw:
    cluster = self.rgw.get('cluster', 'ceph')
    rgw_user = self.rgw.get('user', 'admin')
Member:

Do we do this elsewhere? I don't think we should use the admin credential as a default.

Contributor Author:

Would a NoneType be better here? At this point, we've already generated a client keyring and the input validation should have raised an exception if no keyring was provided ...

Member:

Would a NoneType be better here?

Yes, I think so.
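For illustration, the agreed-upon change might look like the following sketch (the helper name and standalone shape are assumptions for clarity; the real code lives inside the NFS deployment logic):

```python
# Hypothetical sketch: default the RGW user to None rather than 'admin',
# and fail loudly if no keyring user was ever provided.
def get_rgw_identity(rgw):
    cluster = rgw.get('cluster', 'ceph')
    rgw_user = rgw.get('user')  # NoneType default instead of 'admin'
    if rgw_user is None:
        raise ValueError('RGW FSAL requires a keyring user')
    return cluster, rgw_user
```

With this shape, a missing user surfaces as an explicit error instead of silently falling back to the admin credential.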

'prefix': 'auth get-or-create',
'entity': entity,
'caps': ['mon', 'allow r',
         'osd', 'allow rwx'],
Member:

Why does NFS-Ganesha need access to the OSDs when talking to RGW?

Contributor Author:

Do you have a more restrictive set of perms in mind?

Member:

I think it just needs mon "allow r"? cc @cbodley @mattbenjamin

Contributor:

librgw is a librados client. does that mean it needs osd caps?

Member:

I thought librgw was just talking to the monitors and not the OSDs but maybe I misunderstood.

In any case, it should not have unlimited access to the OSDs with osd "allow rwx".

Contributor:

librgw, when initialized, starts an entire radosgw absent only the HTTP front-ends and (by default) some of the background services such as lifecycle. By design, it accesses services in the Ceph cluster the same as an ordinary radosgw would. It needs the same permissions to execute--and even identifies itself with the same client string.

Member:

I see. Thanks for explaining @mattbenjamin. How do you normally restrict rgw instances though in terms of osd caps?

Contributor:

@batrick it is possible to restrict rados caps to just the specific pools that a radosgw cluster needs for index, data, log, and data-extra. The most complete example I know of is by @leseb recently in setting up rook external clusters. We could probably re-use that here.
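A later commit in this PR does tighten this ("restrict the OSD keyring caps to the rgw application"). As a sketch, using Ceph's application-tag cap syntax to scope OSD access to pools tagged with the rgw application could look like this (the helper name is an assumption; the exact cap string is illustrative of the tag syntax, not a quote of the final code):

```python
import json

def rgw_keyring_cmd(entity):
    # Restrict OSD caps to pools tagged with the `rgw` application,
    # instead of a blanket 'allow rwx' across all pools.
    return json.dumps({
        'prefix': 'auth get-or-create',
        'entity': entity,
        'caps': ['mon', 'allow r',
                 'osd', 'allow rwx tag rgw *=*'],
    })
```

The `tag <application> <key>=<value>` form limits the cap to pools carrying that application tag, which is the same mechanism the rook external-cluster setup mentioned above relies on.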

@mattbenjamin (Contributor):

@mgfritch (folks), where does the ganesha.conf come from? There are options here we'd like to check. Also, potentially allow selection of rgw_nfs_fast_attrs.
cc @kalebskeithley

@batrick (Member) commented Oct 12, 2020

@mgfritch (folks), where does the ganesha.conf come from? There are options here we'd like to check. Also, potentially allow selection of rgw_nfs_fast_attrs.

https://github.com/ceph/ceph/pull/37600/files#diff-d19145aa3232f726604da0f1f3c6bba7

We've taken to calling this the "bootstrap" config because most configuration should come from a higher layer like the volumes nfs driver.

@mgfritch (Contributor Author):

A "bootstrap" ganesha.conf is rendered from within mgr/cephadm and injected into each daemon container. The bootstrap config defines a %url directive for additional cluster-wide or user-defined configuration blocks via a RADOS object.

Could we have an RGW block defined in both the bootstrap ganesha.conf and also another RGW block later defined via a RADOS config object? ... but I think that would require the ganesha daemon to take a union of the config values instead of the current behavior (which is to replace the entire RGW block)?

@batrick (Member) commented Oct 12, 2020

Could we have an RGW block defined in both the bootstrap ganesha.conf and also another RGW block later defined via a RADOS config object? ... but I think that would require the ganesha daemon to take a union of the config values instead of the current behavior (which is to replace the entire RGW block)?

As I said in the other comment, it does not take a union so if cephadm defines it then it cannot be overridden TMK.

@mgfritch (Contributor Author) commented Oct 12, 2020

Could we have an RGW block defined in both the bootstrap ganesha.conf and also another RGW block later defined via a RADOS config object? ... but I think that would require the ganesha daemon to take a union of the config values instead of the current behavior (which is to replace the entire RGW block)?

As I said in the other comment, it does not take a union so if cephadm defines it then it cannot be overridden TMK.

I think this was true in older versions of ganesha, but with ganesha v3.3 it does seem that the %url directive will now replace the entire block (rather than simply ignore it). Still, a union of config values seems like a good way to allow multiple consumers to define/overwrite various items in the conf blocks ... for example, a user config where we only want to change a single conf value and not rewrite the entire config block.
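For illustration, the bootstrap-plus-%url layering described above might look like the fragment below (the pool, namespace, object, and client names are made-up examples, not the ones this PR actually generates):

```
# bootstrap ganesha.conf rendered by mgr/cephadm (illustrative)
RADOS_URLS {
    ceph_conf = "/etc/ceph/ceph.conf";
    userid = "nfs.mynfs";                           # example client name
}
%url "rados://mypool/mynamespace/conf-nfs.mynfs"    # example RADOS config object
```

An RGW block defined in the bootstrap file would currently be replaced wholesale, not merged, by an RGW block appearing in the %url object, which is the crux of the discussion above.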

Fixes: https://tracker.ceph.com/issues/43686
Signed-off-by: Michael Fritch <mfritch@suse.com>
- create an RGW keyring for NFS daemon access
- generate RGW FSAL in ganesha.conf

Fixes: https://tracker.ceph.com/issues/43686
Signed-off-by: Michael Fritch <mfritch@suse.com>
remove RGW keyring during NFS daemon `post_remove`

Fixes: https://tracker.ceph.com/issues/43686
Signed-off-by: Michael Fritch <mfritch@suse.com>
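The keyring cleanup described in the commit message above could amount to something like the following sketch (the entity naming scheme and helper name are assumptions for illustration):

```python
import json

def post_remove_keyring_cmd(daemon_id):
    # Remove the RGW keyring created for this NFS daemon when the
    # daemon itself is removed, so stale credentials don't linger.
    entity = 'client.nfs.{}-rgw'.format(daemon_id)
    return json.dumps({'prefix': 'auth rm', 'entity': entity})
```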
@Daniel-Pivonka left a comment:

This functionally worked for me. I was able to put files in buckets and they would show up on the NFS mount.

@mattbenjamin (Contributor):

@Daniel-Pivonka woot

@mattbenjamin (Contributor):

@Daniel-Pivonka @kalebskeithley @dang should we have a review of the configs that can now be generated? I'd like to ensure that our preferred configurations (incl. fast-attrs, above) can be set up.

@bk201 (Contributor) left a comment:

Works for creating an RGW share using the Dashboard.
I can access files in buckets in RO mode.

@jmolmo (Member) left a comment:

I think we can go with this.
We will probably need to improve things so the service can be set up in a simpler way, but these modifications are definitely a first and a big step. :-)

restrict the OSD keyring caps to the `rgw` application

Signed-off-by: Michael Fritch <mfritch@suse.com>
@mgfritch requested a review from batrick October 28, 2020 20:33
@mgfritch (Contributor Author) commented Nov 7, 2020

@mgfritch merged commit 33a907a into ceph:master Nov 7, 2020
@mgfritch deleted the cephadm-nfs-rgw branch November 7, 2020 01:10
mgfritch added a commit to mgfritch/ceph that referenced this pull request Mar 25, 2021
PR ceph#37600 introduced support for both cephfs and rgw while
using the same nfs-ganesha daemon

Signed-off-by: Michael Fritch <mfritch@suse.com>
mgfritch added a commit to mgfritch/ceph that referenced this pull request Mar 25, 2021
PR ceph#37600 introduced support for both `cephfs` and `rgw` exports
to be configured using a single nfs-ganesha cluster

Signed-off-by: Michael Fritch <mfritch@suse.com>
mgfritch added a commit to mgfritch/ceph that referenced this pull request Apr 15, 2021
PR ceph#37600 introduced support for both `cephfs` and `rgw` exports
to be configured using a single nfs-ganesha cluster

Fixes: https://tracker.ceph.com/issues/50369
Signed-off-by: Michael Fritch <mfritch@suse.com>
varshar16 pushed a commit to varshar16/ceph that referenced this pull request Apr 23, 2021
PR ceph#37600 introduced support for both `cephfs` and `rgw` exports
to be configured using a single nfs-ganesha cluster

Fixes: https://tracker.ceph.com/issues/50369
Signed-off-by: Michael Fritch <mfritch@suse.com>
(cherry picked from commit e9b2c38)
mgfritch added a commit to mgfritch/ceph that referenced this pull request Apr 23, 2021
PR ceph#37600 introduced support for both `cephfs` and `rgw` exports
to be configured using a single nfs-ganesha cluster

Fixes: https://tracker.ceph.com/issues/50369
Signed-off-by: Michael Fritch <mfritch@suse.com>
(cherry picked from commit e9b2c38)
varshar16 pushed a commit to varshar16/ceph that referenced this pull request May 18, 2021
PR ceph#37600 introduced support for both `cephfs` and `rgw` exports
to be configured using a single nfs-ganesha cluster

Fixes: https://tracker.ceph.com/issues/50369
Signed-off-by: Michael Fritch <mfritch@suse.com>
(cherry picked from commit e9b2c38)
nizamial09 pushed a commit to rhcs-dashboard/ceph that referenced this pull request Jun 11, 2021
PR ceph#37600 introduced support for both `cephfs` and `rgw` exports
to be configured using a single nfs-ganesha cluster

Fixes: https://tracker.ceph.com/issues/50369
Signed-off-by: Michael Fritch <mfritch@suse.com>
(cherry picked from commit e9b2c38)
(cherry picked from commit 678ee63)

Resolves: rhbz#1951969
smithfarm added a commit to smithfarm/sesdev that referenced this pull request Nov 5, 2021
At some point, upstream obviously changed the syntax of the "ceph nfs cluster
create" command in octopus, thereby breaking "sesdev create pacific" and "sesdev
create ses7".

Later, this syntax change got reverted [1], but since this revert was made
possible by a big feature [2], so far it has been backported only to pacific [3]

[1] ceph/ceph#40411
[2] ceph/ceph#37600
[3] ceph/ceph#41005

Fixes: SUSE#611

Signed-off-by: Nathan Cutler <ncutler@suse.com>