mgr/cephadm: convert tags to repo_digest#36432

Merged
sebastian-philipp merged 5 commits into ceph:master from sebastian-philipp:cephadm-repo_digest
Sep 7, 2020

Conversation


@sebastian-philipp sebastian-philipp commented Aug 3, 2020

Blocked by

This PR allows you to run

$ ceph config set mgr mgr/cephadm/use_repo_digest true
$ ceph orch upgrade ceph/ceph:latest

And cephadm will then implicitly convert :latest to the current sha256 digest and use the digest instead.

Or, it provides a way to convert the global container_image config to the digest:

$ ceph config set global container_image ceph/ceph:latest
$ ceph config set mgr mgr/cephadm/use_repo_digest true
$ ceph config get global container_image
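
For illustration, the conversion rewrites a tag reference into a digest-pinned reference of the form `repo@sha256:<hex>`. A minimal sketch of that rewrite (the helper name and digest value are hypothetical, not taken from this PR):

```python
def to_repo_digest(image_ref: str, sha256_hexdigest: str) -> str:
    # Drop the trailing ':<tag>' and pin the repo to a sha256 digest.
    # Hypothetical sketch: a real implementation must also handle
    # registries with a port ('host:5000/repo') and refs that are
    # already digests.
    repo = image_ref.rsplit(':', 1)[0]
    return f'{repo}@sha256:{sha256_hexdigest}'

# e.g. 'ceph/ceph:latest' -> 'ceph/ceph@sha256:<digest of latest>'
```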

TODO

  • finish the implementation
  • pytest

Follow-up PRs / known issues:

  • podman ps shows the tag name instead of the digest which is irritating
  • missing docs

Checklist

  • References tracker ticket
  • Updates documentation if necessary
  • Includes tests for new functionality or reproducer for bug


@varshar16 varshar16 left a comment

nits:

@varshar16
Contributor

Please update vstart too.

@sebastian-philipp
Contributor Author

Please update vstart too.

wdyt of the overall approach?

@varshar16
Contributor

Please update vstart too.

wdyt of the overall approach?

It looks good.

@varshar16 varshar16 left a comment

  File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/cephadm/module.py", line 1092, in _remote_connection
    yield (conn, connr)
  File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/cephadm/module.py", line 1135, in _run_cephadm
    image = self._get_container_image(entity)
  File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/cephadm/module.py", line 1066, in _get_container_image
    assert False, daemon_type
AssertionError: client
ERROR    orchestrator._interface:_interface.py:346 _Promise failed
Traceback (most recent call last):
  File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/orchestrator/_interface.py", line 299, in _finalize
    next_result = self._on_complete(self._value)
  File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/cephadm/module.py", line 102, in <lambda>
    return CephadmCompletion(on_complete=lambda _: f(*args, **kwargs))
  File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/cephadm/module.py", line 1229, in add_host
    return self._add_host(spec)
  File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/cephadm/module.py", line 1215, in _add_host
    error_ok=True, no_fsid=True)
  File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/cephadm/module.py", line 1135, in _run_cephadm
    image = self._get_container_image(entity)
  File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/cephadm/module.py", line 1066, in _get_container_image
    assert False, daemon_type
AssertionError: client

@sebastian-philipp sebastian-philipp force-pushed the cephadm-repo_digest branch 3 times, most recently from 6c7f174 to e6de0f3 Compare August 18, 2020 13:52
@sebastian-philipp sebastian-philipp marked this pull request as ready for review August 18, 2020 13:53
@sebastian-philipp sebastian-philipp changed the title [WIP] mgr/cephadm: convert tags to repo_digest mgr/cephadm: convert tags to repo_digest Aug 19, 2020
@sebastian-philipp
Contributor Author

Please update vstart too.

Hm. turns out: we have different goals here:

  • vstart tries to pull the latest repo digest
  • This PR tries to not pull anything and is only concerned about having a consistent cluster.

I can see your point, but I think this would be a follow-up PR.

@varshar16
Contributor

friendly ping: Are you ok with merging this anyway and try to improve cephadm ls in a follow-up PR?

Please update the docs too, either in this PR or a follow-up.

The get global container_image still fails for me:

[root@varsha build]# ./bin/ceph config set global container_image ceph/daemon-base:latest
[root@varsha build]# ./bin/ceph config set mgr mgr/cephadm/use_repo_digest true 
[root@varsha build]# ./bin/ceph config get global container_image
Error EINVAL: unrecognized entity 'global'

@varshar16 varshar16 left a comment

The test actually did not work if use_repo_digest is set:

2020-08-28T19:32:47.906 INFO:teuthology.orchestra.run.smithi051.stdout:NAME             HOST       STATUS          REFRESHED  AGE  VERSION               IMAGE NAME                                                          IMAGE ID      CONTAINER ID
2020-08-28T19:32:47.907 INFO:teuthology.orchestra.run.smithi051.stdout:alertmanager.a   smithi051  running (3m)    3s ago     6m   0.21.0                docker.io/prom/alertmanager:latest                                  c876f5897d7b  702b3f51c5fe
2020-08-28T19:32:47.907 INFO:teuthology.orchestra.run.smithi051.stdout:grafana.a        smithi038  running (5m)    1s ago     5m   6.6.2                 docker.io/ceph/ceph-grafana:latest                                  87a51ecf0b1c  76b91cae48d6
2020-08-28T19:32:47.907 INFO:teuthology.orchestra.run.smithi051.stdout:mgr.x            smithi038  running (3m)    1s ago     8m   16.0.0-4841-gddc6595  quay.ceph.io/ceph-ci/ceph:ddc65956ac220ef29f86671e4c9a8bcbce4d500f  515273432f01  97b8237794b0
2020-08-28T19:32:47.907 INFO:teuthology.orchestra.run.smithi051.stdout:mgr.y            smithi051  running (104s)  3s ago     10m  16.0.0-4841-gddc6595  quay.ceph.io/ceph-ci/ceph:ddc65956ac220ef29f86671e4c9a8bcbce4d500f  515273432f01  f5285b655a81
2020-08-28T19:32:47.908 INFO:teuthology.orchestra.run.smithi051.stdout:mon.a            smithi051  running (2m)    3s ago     10m  16.0.0-4841-gddc6595  quay.ceph.io/ceph-ci/ceph:ddc65956ac220ef29f86671e4c9a8bcbce4d500f  515273432f01  cdf5ec15899e
2020-08-28T19:32:47.908 INFO:teuthology.orchestra.run.smithi051.stdout:mon.b            smithi038  running (2m)    1s ago     9m   16.0.0-4841-gddc6595  quay.ceph.io/ceph-ci/ceph:ddc65956ac220ef29f86671e4c9a8bcbce4d500f  515273432f01  8211285b9228
2020-08-28T19:32:47.908 INFO:teuthology.orchestra.run.smithi051.stdout:mon.c            smithi051  running (2m)    3s ago     9m   16.0.0-4841-gddc6595  quay.ceph.io/ceph-ci/ceph:ddc65956ac220ef29f86671e4c9a8bcbce4d500f  515273432f01  2d6be1cf5511
2020-08-28T19:32:47.908 INFO:teuthology.orchestra.run.smithi051.stdout:node-exporter.a  smithi051  running (6m)    3s ago     6m   1.0.1                 docker.io/prom/node-exporter:latest                                 0e0218889c33  cd0c8c8ad842
2020-08-28T19:32:47.909 INFO:teuthology.orchestra.run.smithi051.stdout:node-exporter.b  smithi038  running (6m)    1s ago     6m   1.0.1                 docker.io/prom/node-exporter:latest                                 0e0218889c33  5651378ef705
2020-08-28T19:32:47.909 INFO:teuthology.orchestra.run.smithi051.stdout:osd.0            smithi051  running (30s)   3s ago     8m   16.0.0-4841-gddc6595  quay.ceph.io/ceph-ci/ceph:ddc65956ac220ef29f86671e4c9a8bcbce4d500f  515273432f01  c2fd2ae5820b
2020-08-28T19:32:47.909 INFO:teuthology.orchestra.run.smithi051.stdout:osd.1            smithi051  running (20s)   3s ago     8m   16.0.0-4841-gddc6595  quay.ceph.io/ceph-ci/ceph:ddc65956ac220ef29f86671e4c9a8bcbce4d500f  515273432f01  2060c8529ce6
2020-08-28T19:32:47.909 INFO:teuthology.orchestra.run.smithi051.stdout:osd.2            smithi051  running (12s)   3s ago     8m   16.0.0-4841-gddc6595  quay.ceph.io/ceph-ci/ceph:ddc65956ac220ef29f86671e4c9a8bcbce4d500f  515273432f01  fdb420ad7983
2020-08-28T19:32:47.909 INFO:teuthology.orchestra.run.smithi051.stdout:osd.3            smithi051  running (4s)    3s ago     7m   16.0.0-4841-gddc6595  quay.ceph.io/ceph-ci/ceph:ddc65956ac220ef29f86671e4c9a8bcbce4d500f  515273432f01  5c3e6ac126fd
2020-08-28T19:32:47.910 INFO:teuthology.orchestra.run.smithi051.stdout:osd.4            smithi038  running (96s)   1s ago     7m   16.0.0-4841-gddc6595  quay.ceph.io/ceph-ci/ceph:ddc65956ac220ef29f86671e4c9a8bcbce4d500f  515273432f01  e2a4d5dc9712
2020-08-28T19:32:47.910 INFO:teuthology.orchestra.run.smithi051.stdout:osd.5            smithi038  running (86s)   1s ago     7m   16.0.0-4841-gddc6595  quay.ceph.io/ceph-ci/ceph:ddc65956ac220ef29f86671e4c9a8bcbce4d500f  515273432f01  5d522fa4fbc7
2020-08-28T19:32:47.910 INFO:teuthology.orchestra.run.smithi051.stdout:osd.6            smithi038  running (76s)   1s ago     7m   16.0.0-4841-gddc6595  quay.ceph.io/ceph-ci/ceph:ddc65956ac220ef29f86671e4c9a8bcbce4d500f  515273432f01  86ad28ae25b8
2020-08-28T19:32:47.910 INFO:teuthology.orchestra.run.smithi051.stdout:osd.7            smithi038  running (53s)   1s ago     6m   16.0.0-4841-gddc6595  quay.ceph.io/ceph-ci/ceph:ddc65956ac220ef29f86671e4c9a8bcbce4d500f  515273432f01  8f023675de2b
2020-08-28T19:32:47.910 INFO:teuthology.orchestra.run.smithi051.stdout:prometheus.a     smithi038  running (5m)    1s ago     6m   2.20.1                docker.io/prom/prometheus:latest                                    b205ccdd28d3  8ac16cf9358a

http://qa-proxy.ceph.com/teuthology/swagner-2020-08-28_15:32:32-rados:cephadm-wip-swagner3-testing-2020-08-28-1412-distro-basic-smithi/5384246/teuthology.log

@sebastian-philipp
Contributor Author

Error EINVAL: unrecognized entity 'global'

I do know for sure that the global container_image option is set (you can verify this via config dump). But it seems that it's not that easy to show this value.

@varshar16
Contributor

Error EINVAL: unrecognized entity 'global'

I do know for sure that the global container_image option is set (you can verify this via config dump). But it seems that it's not that easy to show this value.

I don't think the error is due to this PR. I get the same result even on master branch.

Before setting

[root@varsha build]# ./bin/ceph config dump
WHO     MASK  LEVEL     OPTION                                               VALUE                                                                                               RO
global        basic     container_image                                      docker.io/ceph/daemon-base@sha256:aaf292561f22c4b881b27cd48d8e440b52d6b440cd4a587f6fcab44996843804  * 

After setting

[root@varsha build]# ./bin/ceph config set global container_image ceph/daemon-base:latest
[root@varsha build]# ./bin/ceph config dump
WHO     MASK  LEVEL     OPTION                                               VALUE                              RO
global        basic     container_image                                      ceph/daemon-base:latest            *

[root@varsha build]# ./bin/ceph config get global container_image
Error EINVAL: unrecognized entity 'global'

Signed-off-by: Sebastian Wagner <sebastian.wagner@suse.com>
Signed-off-by: Sebastian Wagner <sebastian.wagner@suse.com>
Signed-off-by: Sebastian Wagner <sebastian.wagner@suse.com>
As this is the most interesting test suite

Signed-off-by: Sebastian Wagner <sebastian.wagner@suse.com>
* Automatically convert tags like `:latest` to the digest
* Use the digest instead of the tag

Signed-off-by: Sebastian Wagner <sebastian.wagner@suse.com>
@sebastian-philipp
Contributor Author

rebased

'name': 'use_repo_digest',
'type': 'bool',
'default': False,
'desc': 'Automatically convert image tags to image digest. Make sure all daemons use the same image',
Contributor

The description sounds like the behavior users would expect to be active by default, so I'm confused why the default is set to "False".

Contributor Author

Eh, the description describes the flag itself. Independent of the value.

Member

+1 to use true as the default value. It's the safe option!

@sebastian-philipp
Contributor Author

@varshar16

according to

http://qa-proxy.ceph.com/teuthology/swagner-2020-09-03_11:01:59-rados:cephadm-wip-swagner3-testing-2020-09-03-1047-distro-basic-smithi/5402458/remote/ubuntu%40smithi098.front.sepia.ceph.com/log/

The teuthology log looks like so:

2020-09-03T12:16:42.920 INFO:teuthology.orchestra.run.smithi098.stdout:true
2020-09-03T12:16:43.223 INFO:teuthology.orchestra.run.smithi098.stdout:NAME             HOST       STATUS         REFRESHED  AGE  VERSION               IMAGE NAME                                                          IMAGE ID      CONTAINER ID
2020-09-03T12:16:43.223 INFO:teuthology.orchestra.run.smithi098.stdout:alertmanager.a   smithi098  running (37s)  1s ago     4m   0.21.0                docker.io/prom/alertmanager:latest                                  c876f5897d7b  aaecbc922db7
2020-09-03T12:16:43.223 INFO:teuthology.orchestra.run.smithi098.stdout:grafana.a        smithi186  running (3m)   0s ago     3m   6.6.2                 docker.io/ceph/ceph-grafana:latest                                  87a51ecf0b1c  e68b7d8b95d4
2020-09-03T12:16:43.224 INFO:teuthology.orchestra.run.smithi098.stdout:mgr.x            smithi186  running (68s)  0s ago     6m   16.0.0-5080-gce97282  quay.ceph.io/ceph-ci/ceph:ce97282e2d1d22e6f86a056c004b027f1d691e4b  571a1142cbef  da2ad010b0de
2020-09-03T12:16:43.224 INFO:teuthology.orchestra.run.smithi098.stdout:mgr.y            smithi098  running (19s)  1s ago     8m   16.0.0-5080-gce97282  quay.ceph.io/ceph-ci/ceph:ce97282e2d1d22e6f86a056c004b027f1d691e4b  571a1142cbef  c2b1a4547c48
2020-09-03T12:16:43.224 INFO:teuthology.orchestra.run.smithi098.stdout:mon.a            smithi098  running (11s)  1s ago     8m   16.0.0-5080-gce97282  quay.ceph.io/ceph-ci/ceph:ce97282e2d1d22e6f86a056c004b027f1d691e4b  571a1142cbef  8ebba3204da5
2020-09-03T12:16:43.224 INFO:teuthology.orchestra.run.smithi098.stdout:mon.b            smithi186  running (6m)   0s ago     6m   15.2.0                docker.io/ceph/ceph:v15.2.0                                         204a01f9b0b6  459bc2183f95
2020-09-03T12:16:43.224 INFO:teuthology.orchestra.run.smithi098.stdout:mon.c            smithi098  running (7m)   1s ago     7m   15.2.0                docker.io/ceph/ceph:v15.2.0                                         204a01f9b0b6  92c35eaa52d7
2020-09-03T12:16:43.225 INFO:teuthology.orchestra.run.smithi098.stdout:node-exporter.a  smithi098  running (4m)   1s ago     4m   1.0.1                 docker.io/prom/node-exporter:latest                                 0e0218889c33  69451d728f18
2020-09-03T12:16:43.225 INFO:teuthology.orchestra.run.smithi098.stdout:node-exporter.b  smithi186  running (4m)   0s ago     4m   1.0.1                 docker.io/prom/node-exporter:latest                                 0e0218889c33  fe5f6d8a11c6
2020-09-03T12:16:43.225 INFO:teuthology.orchestra.run.smithi098.stdout:osd.0            smithi098  running (6m)   1s ago     6m   15.2.0                docker.io/ceph/ceph:v15.2.0                                         204a01f9b0b6  2431ae140b47
2020-09-03T12:16:43.225 INFO:teuthology.orchestra.run.smithi098.stdout:osd.1            smithi098  running (6m)   1s ago     6m   15.2.0                docker.io/ceph/ceph:v15.2.0                                         204a01f9b0b6  044a3c00dee4
2020-09-03T12:16:43.225 INFO:teuthology.orchestra.run.smithi098.stdout:osd.2            smithi098  running (5m)   1s ago     5m   15.2.0                docker.io/ceph/ceph:v15.2.0                                         204a01f9b0b6  291e9d79acdc
2020-09-03T12:16:43.226 INFO:teuthology.orchestra.run.smithi098.stdout:osd.3            smithi098  running (5m)   1s ago     5m   15.2.0                docker.io/ceph/ceph:v15.2.0                                         204a01f9b0b6  baf0aa2e72fe
2020-09-03T12:16:43.226 INFO:teuthology.orchestra.run.smithi098.stdout:osd.4            smithi186  running (5m)   0s ago     5m   15.2.0                docker.io/ceph/ceph:v15.2.0                                         204a01f9b0b6  809fc75bbe41
2020-09-03T12:16:43.226 INFO:teuthology.orchestra.run.smithi098.stdout:osd.5            smithi186  running (5m)   0s ago     5m   15.2.0                docker.io/ceph/ceph:v15.2.0                                         204a01f9b0b6  0597f0a52c7d
2020-09-03T12:16:43.226 INFO:teuthology.orchestra.run.smithi098.stdout:osd.6            smithi186  running (4m)   0s ago     4m   15.2.0                docker.io/ceph/ceph:v15.2.0                                         204a01f9b0b6  f26e27c560e5
2020-09-03T12:16:43.226 INFO:teuthology.orchestra.run.smithi098.stdout:osd.7            smithi186  running (4m)   0s ago     4m   15.2.0                docker.io/ceph/ceph:v15.2.0                                         204a01f9b0b6  473ae7817dd8
2020-09-03T12:16:43.227 INFO:teuthology.orchestra.run.smithi098.stdout:prometheus.a     smithi186  running (3m)   0s ago     4m   2.20.1                docker.io/prom/prometheus:latest                                    b205ccdd28d3  e4cea2439f5b

Note that mgr.y is shown as using quay.ceph.io/ceph-ci/ceph:ce97282e2d1d22e6f86a056c004b027f1d691e4b. But when looking at the cephadm.log on smithi098, we can see:

cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph@sha256:e7b7b8a04f7579d199cdfbb83cfea7ae372c85a14a737c904ebd20249468d40d', 'deploy', '--fsid', '0cbc5e34-edde-11ea-a080-001a4aab830c', '--name', 'mgr.y', '--config-json', '-', '--allow-ptrace']
2020-09-03 12:16:11,081 DEBUG Running command: systemctl is-enabled ceph-0cbc5e34-edde-11ea-a080-001a4aab830c@mgr.y
2020-09-03 12:16:11,089 DEBUG systemctl:stdout enabled
2020-09-03 12:16:11,089 DEBUG Running command: systemctl is-active ceph-0cbc5e34-edde-11ea-a080-001a4aab830c@mgr.y
2020-09-03 12:16:11,094 DEBUG systemctl:stdout active
2020-09-03 12:16:11,094 INFO Redeploy daemon mgr.y ...

Which means, the container image is actually correct!

@varshar16 are you ok with merging this now?

@sebastian-philipp
Contributor Author


def get_image_info_from_inspect(out, image):
    # type: (str, str) -> Dict[str, str]
    image_id, digests = out.split(',', 1)
Member

maybe more coherent if you move this before you check if out is empty
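
The reordering the reviewer suggests could be sketched like this (an illustrative reconstruction, not the PR's actual code; only the `image_id,digests` comma-separated shape is taken from the quoted snippet):

```python
from typing import Dict


def get_image_info_from_inspect(out: str, image: str) -> Dict[str, str]:
    out = out.strip()
    # Check for empty output *before* splitting, so the caller gets a
    # clear error about the inspect call rather than a cryptic
    # ValueError from the split below.
    if not out:
        raise ValueError(f'inspect of image {image} returned no output')
    image_id, _, digests = out.partition(',')
    return {
        'image_id': image_id,
        'repo_digests': digests,
    }
```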

cd.command_registry_login()
assert str(e.value) == "Failed to login to custom registry @ sample-url as sample-user with given password"

def test_get_image_info_from_inspect(self):
Member

What about testing it with different combinations of "image id" and "repo_digest" being empty?
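
The combinations could look like this in a table-driven sketch (the parser here is a hypothetical stand-in that only mimics the `image_id,digests` shape from the quoted snippet, not the PR's implementation):

```python
# Hypothetical stand-in parser mimicking the 'image_id,digests' CSV shape.
def parse_inspect(out):
    image_id, _, digests = out.partition(',')
    return {
        'image_id': image_id,
        'repo_digest': digests.split()[0] if digests.strip() else '',
    }

# One case per combination of empty/non-empty image id and digest.
cases = [
    ('id1,repo@sha256:aa', {'image_id': 'id1', 'repo_digest': 'repo@sha256:aa'}),
    ('id1,',               {'image_id': 'id1', 'repo_digest': ''}),               # digest empty
    (',repo@sha256:aa',    {'image_id': '',    'repo_digest': 'repo@sha256:aa'}), # image id empty
    (',',                  {'image_id': '',    'repo_digest': ''}),               # both empty
]
for out, expected in cases:
    assert parse_inspect(out) == expected
```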

self.registry_url: Optional[str] = None
self.registry_username: Optional[str] = None
self.registry_password: Optional[str] = None
self.use_repo_digest = False
Member

I insist. True by default seems safer for everybody.

self.log.debug(f'image {image_name} -> {r}')
return r
except (ValueError, KeyError) as _:
msg = 'Failed to pull %s on %s: %s' % (image_name, host, '\n'.join(out))
Member

You haven't failed pulling the image, you have failed trying to process the output of the command

@jmolmo jmolmo left a comment

Just Minor nits... and this is something that can avoid "weird" problems!!!

Comment on lines +536 to +553
def convert_tags_to_repo_digest(self):
    if not self.use_repo_digest:
        return
    settings = self.upgrade.get_distinct_container_image_settings()
    digests: Dict[str, ContainerInspectInfo] = {}
    for container_image_ref in set(settings.values()):
        if not is_repo_digest(container_image_ref):
            image_info = self._get_container_image_info(container_image_ref)
            if image_info.repo_digest:
                assert is_repo_digest(image_info.repo_digest), image_info
            digests[container_image_ref] = image_info

    for entity, container_image_ref in settings.items():
        if not is_repo_digest(container_image_ref):
            image_info = digests[container_image_ref]
            if image_info.repo_digest:
                self.set_container_image(entity, image_info.repo_digest)

Contributor

minor nit: this is difficult to read, maybe a few comments about what each loop does would help ..

basically we are building a list of digests and only setting the container image iff the config value is a label (and not an existing digest).
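
The label-vs-digest distinction that drives both loops can be sketched as follows (a simplified stand-in; the PR's actual is_repo_digest helper may be stricter):

```python
def is_repo_digest(image_ref: str) -> bool:
    # A repo digest pins the image as 'repo@sha256:<hex>', while a
    # label/tag reference such as 'repo:latest' contains no '@'.
    # Simplified stand-in for illustration only.
    return '@' in image_ref
```

So the first loop inspects only tag references to resolve their digests, and the second loop rewrites only those same tag references, leaving refs that are already digests untouched.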


try:

self.convert_tags_to_repo_digest()
Contributor

nit: a comment here that the digest only changes iff the container was set to ref the image by label

having this routine in the serve() thread implies that the digest always changes, which is not how it's actually implemented

for opt in config:
if opt['name'] == 'container_image':
image_settings[opt['section']] = opt['value']
image_settings = self.get_distinct_container_image_settings()
Contributor

upgrade always converts from the supplied image to a new digest 👍

@varshar16
Contributor

varshar16 commented Sep 7, 2020

There is something wrong with ceph orch ps ?

ceph orch ps is technically correct: the image is correct. It's just shown as not using the repo digest

@varshar16 varshar16 left a comment

Requires a follow-up PR to address the multiple issues found; the test needs to be modified to check whether images use the repo digest.

@sebastian-philipp
Contributor Author

follow up issue: https://tracker.ceph.com/issues/47332

5 participants