cephadm: include service_name in unit.meta file #39644

Merged
sebastian-philipp merged 6 commits into ceph:master from liewegas:cephadm-service-id
Feb 26, 2021

Conversation

@liewegas (Member)

Fall back to the old code to infer this when it is not present.
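The described behavior — prefer the recorded service_name, fall back to inferring it from the daemon name — can be sketched roughly as follows. This is an illustrative sketch, not cephadm's actual code: the path layout, function name, and the simplified fallback rule are assumptions, and cephadm's real inference handles more daemon types.

```python
import json
import os

def get_service_name(daemon_dir: str, daemon_name: str) -> str:
    """Prefer the service_name recorded in unit.meta; otherwise fall
    back to inferring it from the daemon name. (Illustrative only --
    cephadm's real inference covers more cases, which is exactly why
    storing service_name explicitly is preferable.)"""
    meta_path = os.path.join(daemon_dir, 'unit.meta')
    if os.path.exists(meta_path):
        with open(meta_path) as f:
            meta = json.load(f)
        if meta.get('service_name'):
            return meta['service_name']
    # Fallback inference, e.g. 'mgr.x' -> service 'mgr'
    return daemon_name.split('.', 1)[0]
```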

Checklist

  • References tracker ticket
  • Updates documentation if necessary
  • Includes tests for new functionality or reproducer for bug


Report of container memory usage

Signed-off-by: Sage Weil <sage@newdream.net>
Set a limit on the pod.  Pass both request and limit as
POD_MEMORY_REQUEST and POD_MEMORY_LIMIT, for consistency with Rook.

Store the request and limit in a new unit.meta file, stored next to
unit.run.

Report everything in unit.meta with 'ls' result.

Signed-off-by: Sage Weil <sage@newdream.net>
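The storage step from this commit can be sketched as a small helper that writes the new unit.meta file next to unit.run. The function name and exact field names are assumptions drawn from the commit message; the real file also carries other metadata such as service_name.

```python
import json
import os

def write_unit_meta(daemon_dir: str, memory_request: int, memory_limit: int) -> None:
    # Sketch: record the memory request/limit next to unit.run so the
    # 'ls' command can report them later. The same values are passed to
    # the container as POD_MEMORY_REQUEST and POD_MEMORY_LIMIT, for
    # consistency with Rook. (Illustrative names, not cephadm's code.)
    meta = {
        'memory_request': memory_request,  # bytes
        'memory_limit': memory_limit,      # bytes
    }
    with open(os.path.join(daemon_dir, 'unit.meta'), 'w') as f:
        json.dump(meta, f, indent=4)
```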
Keep this in our cached inventory.

Signed-off-by: Sage Weil <sage@newdream.net>
@sebastian-philipp (Contributor)

liewegas force-pushed the cephadm-service-id branch 3 times, most recently from fc4291e to efcedba on February 25, 2021 at 18:59
@sebastian-philipp (Contributor)

Wondering why this wasn't caught by mypy. Could have saved a whole Teuthology run:

2021-02-25T18:50:33.342 INFO:tasks.workunit.client.0.smithi136.stderr:Traceback (most recent call last):
2021-02-25T18:50:33.343 INFO:tasks.workunit.client.0.smithi136.stderr:  File "/tmp/tmp.6A8NiRVFq6/cephadm", line 7811, in <module>
2021-02-25T18:50:33.343 INFO:tasks.workunit.client.0.smithi136.stderr:    main()
2021-02-25T18:50:33.343 INFO:tasks.workunit.client.0.smithi136.stderr:  File "/tmp/tmp.6A8NiRVFq6/cephadm", line 7800, in main
2021-02-25T18:50:33.343 INFO:tasks.workunit.client.0.smithi136.stderr:    r = ctx.func(ctx)
2021-02-25T18:50:33.344 INFO:tasks.workunit.client.0.smithi136.stderr:  File "/tmp/tmp.6A8NiRVFq6/cephadm", line 1689, in _default_image
2021-02-25T18:50:33.344 INFO:tasks.workunit.client.0.smithi136.stderr:    return func(ctx)
2021-02-25T18:50:33.344 INFO:tasks.workunit.client.0.smithi136.stderr:  File "/tmp/tmp.6A8NiRVFq6/cephadm", line 4751, in command_adopt
2021-02-25T18:50:33.344 INFO:tasks.workunit.client.0.smithi136.stderr:    command_adopt_ceph(ctx, daemon_type, daemon_id, fsid);
2021-02-25T18:50:33.345 INFO:tasks.workunit.client.0.smithi136.stderr:  File "/tmp/tmp.6A8NiRVFq6/cephadm", line 4959, in command_adopt_ceph
2021-02-25T18:50:33.345 INFO:tasks.workunit.client.0.smithi136.stderr:    osd_fsid=osd_fsid)
2021-02-25T18:50:33.345 INFO:tasks.workunit.client.0.smithi136.stderr:  File "/tmp/tmp.6A8NiRVFq6/cephadm", line 2640, in deploy_daemon_units
2021-02-25T18:50:33.345 INFO:tasks.workunit.client.0.smithi136.stderr:    if ctx.meta_json:
2021-02-25T18:50:33.346 INFO:tasks.workunit.client.0.smithi136.stderr:  File "/tmp/tmp.6A8NiRVFq6/cephadm", line 158, in __getattr__
2021-02-25T18:50:33.346 INFO:tasks.workunit.client.0.smithi136.stderr:    return super().__getattribute__(name)
2021-02-25T18:50:33.346 INFO:tasks.workunit.client.0.smithi136.stderr:AttributeError: 'CephadmContext' object has no attribute 'meta_json'
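A likely reason mypy missed this: when a class defines `__getattr__`, mypy treats every attribute access on instances of that class as valid, so an attribute like `meta_json` that is only set by some argument parsers type-checks statically and fails only at runtime. The minimal stand-in below (not the real CephadmContext, just the same pattern from the traceback) demonstrates both the hole and a safe access style.

```python
class CephadmContext:
    """Minimal stand-in mirroring the __getattr__ pattern in the traceback."""

    def __getattr__(self, name: str):
        # Because __getattr__ is defined, mypy assumes any attribute
        # access on this class is valid, so `ctx.meta_json` type-checks
        # even if nothing ever set it -- it only fails at runtime with
        # the AttributeError seen in the Teuthology log above.
        return super().__getattribute__(name)

ctx = CephadmContext()
# Safe access with a default avoids the runtime AttributeError:
meta_json = getattr(ctx, 'meta_json', None)
```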

e.g., --meta-json '{"foo": "bar"}'

Signed-off-by: Sage Weil <sage@newdream.net>
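The flag from this commit might be wired up roughly as below. The option name and example value come from the commit message; the parser setup itself is an illustrative sketch, not cephadm's actual argument-parsing code.

```python
import argparse
import json

parser = argparse.ArgumentParser()
# Hypothetical wiring for the new option described in the commit message.
# argparse exposes '--meta-json' as args.meta_json (dashes -> underscores).
parser.add_argument(
    '--meta-json',
    help="JSON dict of extra metadata, e.g. --meta-json '{\"foo\": \"bar\"}'")

args = parser.parse_args(['--meta-json', '{"foo": "bar"}'])
meta = json.loads(args.meta_json) if args.meta_json else {}
```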
Inferring service_name from the daemon name is error-prone.

Fixes: https://tracker.ceph.com/issues/46219
Signed-off-by: Sage Weil <sage@newdream.net>
This is slightly gross, but we need ctx.meta_json for the bootstrap case,
which deploys a mon and mgr.

Signed-off-by: Sage Weil <sage@newdream.net>
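The "slightly gross" workaround might look something like this: bootstrap deploys a mon and mgr through the normal deploy path, which reads `ctx.meta_json`, but bootstrap's own argument parser never defines `--meta-json`, so a default has to be placed on the context. The class and function names here are illustrative stand-ins, not cephadm's.

```python
class Ctx:
    """Stand-in for the parsed-arguments context object."""
    pass

def ensure_meta_json(ctx):
    # Sketch of the workaround: make sure ctx.meta_json exists before
    # entering the shared deploy path, so deploy code can read it
    # unconditionally even when the flag was never defined.
    if not hasattr(ctx, 'meta_json'):
        ctx.meta_json = None
    return ctx
```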