
osd/ReplicatedBackend: Rename Push/Pull Info #49546

Merged
ljflores merged 5 commits into ceph:main from Matan-B:wip-matanb-pull-push-naming
Mar 9, 2023

Conversation

@Matan-B
Contributor

@Matan-B Matan-B commented Dec 22, 2022

(These commits are split out from Crimson's counterpart WIP around prep_push, which makes similar changes.)

`pi` is an overloaded abbreviation with many possible expansions across overlapping files:

pool info, past intervals, pull info, push info, pg info, peer info, PGState instance.
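A hypothetical sketch of the kind of rename these commits make (Python here for brevity; the actual change is in the C++ ReplicatedBackend code): a bare `pi` local is replaced by a name that states which of the overlapping "p…i" structures is in hand. The class and function names below are illustrative stand-ins, not the real Ceph types.

```python
from dataclasses import dataclass

# Illustrative stand-ins for the real C++ PushInfo / PullInfo structs.
@dataclass
class PushInfo:
    recovery_progress: str

@dataclass
class PullInfo:
    recovery_progress: str

def prep_push(pushing: dict, soid: str) -> PushInfo:
    # Before the rename the local was just `pi`, which could equally
    # suggest pool info, past intervals, pg info, or peer info.
    # After: the name states exactly which structure is being used.
    push_info = pushing[soid]
    return push_info

info = prep_push({"obj1": PushInfo("start")}, "obj1")
assert info.recovery_progress == "start"
```

The rename carries no behavioral change; it only disambiguates which structure a local variable refers to at each call site.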

Checklist

  • Tracker (select at least one)
    • References tracker ticket
    • Very recent bug; references commit where it was introduced
    • New feature (ticket optional)
    • Doc update (no ticket needed)
    • Code cleanup (no ticket needed)
  • Component impact
    • Affects Dashboard, opened tracker ticket
    • Affects Orchestrator, opened tracker ticket
    • No impact that needs to be tracked
  • Documentation (select at least one)
    • Updates relevant documentation
    • No doc update is appropriate
  • Tests (select at least one)
Available Jenkins commands:
  • jenkins retest this please
  • jenkins test classic perf
  • jenkins test crimson perf
  • jenkins test signed
  • jenkins test make check
  • jenkins test make check arm64
  • jenkins test submodules
  • jenkins test dashboard
  • jenkins test dashboard cephadm
  • jenkins test api
  • jenkins test docs
  • jenkins render docs
  • jenkins test ceph-volume all
  • jenkins test ceph-volume tox
  • jenkins test windows

Signed-off-by: Matan Breizman <mbreizma@redhat.com>
'pi' has excessive initials meanings.

pool info, past intervals, pull info,
push info, pg info, peer info, PGState instance.

Signed-off-by: Matan Breizman <mbreizma@redhat.com>
@Matan-B Matan-B requested a review from a team as a code owner December 22, 2022 13:11
@github-actions github-actions bot added the core label Dec 22, 2022
Contributor

@ronen-fr ronen-fr left a comment


LGTM with some minor comments

@rzarzynski
Contributor

The following tests FAILED:
	 34 - run-rbd-unit-tests-127.sh (Timeout)

Probably nothing to worry about.

@Matan-B Matan-B force-pushed the wip-matanb-pull-push-naming branch from 70b6180 to f29c15a on December 25, 2022 11:00
Signed-off-by: Matan Breizman <mbreizma@redhat.com>
Signed-off-by: Matan Breizman <mbreizma@redhat.com>
'pi' has excessive initials meanings.

pool info, past intervals, pull info,
push info, pg info, peer info, PGState instance.

Signed-off-by: Matan Breizman <mbreizma@redhat.com>
Contributor

@athanatos athanatos left a comment


Needs qa run

@ljflores
Member

Hey @Matan-B, can you see if these errors may be related to your PR? There were two other "OSD" PRs in this batch, so checking with them as well.

https://pulpito.ceph.com/?branch=wip-yuri11-testing-2023-02-06-1424

/a/yuriw-2023-02-13_22:35:30-rados-wip-yuri11-testing-2023-02-06-1424-distro-default-smithi/7173561

2023-02-15T15:40:18.498 INFO:teuthology.orchestra.run.smithi016.stderr:Inferring config /var/lib/ceph/4a325d08-ad45-11ed-9ae5-001a4aab830c/mon.a/config
2023-02-15T15:40:18.721 INFO:journalctl@ceph.mon.a.smithi016.stdout:Feb 15 15:40:18 smithi016 ceph-mon[105234]: purged_snaps scrub starts
2023-02-15T15:40:18.721 INFO:journalctl@ceph.mon.a.smithi016.stdout:Feb 15 15:40:18 smithi016 ceph-mon[105234]: purged_snaps scrub ok
2023-02-15T15:40:18.722 INFO:journalctl@ceph.mon.a.smithi016.stdout:Feb 15 15:40:18 smithi016 ceph-mon[105234]: osdmap e139: 12 total, 12 up, 12 in
2023-02-15T15:40:18.776 INFO:journalctl@ceph.mon.b.smithi082.stdout:Feb 15 15:40:18 smithi082 ceph-mon[106992]: purged_snaps scrub starts
2023-02-15T15:40:18.777 INFO:journalctl@ceph.mon.b.smithi082.stdout:Feb 15 15:40:18 smithi082 ceph-mon[106992]: purged_snaps scrub ok
2023-02-15T15:40:18.777 INFO:journalctl@ceph.mon.b.smithi082.stdout:Feb 15 15:40:18 smithi082 ceph-mon[106992]: osdmap e139: 12 total, 12 up, 12 in
2023-02-15T15:40:18.833 INFO:journalctl@ceph.mon.c.smithi153.stdout:Feb 15 15:40:18 smithi153 ceph-mon[107435]: purged_snaps scrub starts
2023-02-15T15:40:18.834 INFO:journalctl@ceph.mon.c.smithi153.stdout:Feb 15 15:40:18 smithi153 ceph-mon[107435]: purged_snaps scrub ok
2023-02-15T15:40:18.834 INFO:journalctl@ceph.mon.c.smithi153.stdout:Feb 15 15:40:18 smithi153 ceph-mon[107435]: osdmap e139: 12 total, 12 up, 12 in
2023-02-15T15:40:19.776 INFO:journalctl@ceph.mon.b.smithi082.stdout:Feb 15 15:40:19 smithi082 ceph-mon[106992]: pgmap v451: 1 pgs: 1 active+clean+inconsistent; 577 KiB data, 79 MiB used, 1.0 TiB / 1.0 TiB avail
2023-02-15T15:40:19.833 INFO:journalctl@ceph.mon.c.smithi153.stdout:Feb 15 15:40:19 smithi153 ceph-mon[107435]: pgmap v451: 1 pgs: 1 active+clean+inconsistent; 577 KiB data, 79 MiB used, 1.0 TiB / 1.0 TiB avail
2023-02-15T15:40:19.891 INFO:journalctl@ceph.mon.a.smithi016.stdout:Feb 15 15:40:19 smithi016 ceph-mon[105234]: pgmap v451: 1 pgs: 1 active+clean+inconsistent; 577 KiB data, 79 MiB used, 1.0 TiB / 1.0 TiB avail
2023-02-15T15:40:19.973 INFO:teuthology.orchestra.run.smithi016.stdout:
2023-02-15T15:40:19.974 INFO:teuthology.orchestra.run.smithi016.stdout:{"status":"HEALTH_ERR","checks":{"OSD_SCRUB_ERRORS":{"severity":"HEALTH_ERR","summary":{"message":"1 scrub errors","count":1},"muted":false},"PG_DAMAGED":{"severity":"HEALTH_ERR","summary":{"message":"Possible data damage: 1 pg inconsistent","count":1},"muted":false}},"mutes":[]}
2023-02-15T15:40:20.351 INFO:tasks.cephadm:Teardown begin
2023-02-15T15:40:20.352 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_teuthology_a5875b2da3506f26286d023ce2de3e75c0eb806d/teuthology/contextutil.py", line 33, in nested
    yield vars
  File "/home/teuthworker/src/github.com_ceph_ceph-c_af6f31a11de6c2d1098be151e08b0efe0779c691/qa/tasks/cephadm.py", line 1668, in task
    healthy(ctx=ctx, config=config)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_af6f31a11de6c2d1098be151e08b0efe0779c691/qa/tasks/ceph.py", line 1472, in healthy
    manager.wait_until_healthy(timeout=300)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_af6f31a11de6c2d1098be151e08b0efe0779c691/qa/tasks/ceph_manager.py", line 3181, in wait_until_healthy
    assert time.time() - start < timeout, \
AssertionError: timeout expired in wait_until_healthy

/a/yuriw-2023-02-13_22:35:30-rados-wip-yuri11-testing-2023-02-06-1424-distro-default-smithi/7173564

2023-02-15T15:51:52.792 INFO:tasks.ec_inconsistent_hinfo.ceph_manager:{'pgid': '2.0', 'version': "25'22624", 'reported_seq': 53217, 'reported_epoch': 33, 'state': 'active+backfill_unfound+degraded+remapped', 'last_fresh': '2023-02-15T15:48:23.678966+0000', 'last_change': '2023-02-15T15:32:23.557919+0000', 'last_active': '2023-02-15T15:48:23.678966+0000', 'last_peered': '2023-02-15T15:48:23.678966+0000', 'last_clean': '2023-02-15T15:30:25.565389+0000', 'last_became_active': '2023-02-15T15:30:30.637744+0000', 'last_became_peered': '2023-02-15T15:30:30.637744+0000', 'last_unstale': '2023-02-15T15:48:23.678966+0000', 'last_undegraded': '2023-02-15T15:31:38.456208+0000', 'last_fullsized': '2023-02-15T15:48:23.678966+0000', 'mapping_epoch': 32, 'log_start': "25'22591", 'ondisk_log_start': "25'22591", 'created': 17, 'last_epoch_clean': 30, 'parent': '0.0', 'parent_split_bits': 0, 'last_scrub': "20'2", 'last_scrub_stamp': '2023-02-15T15:29:32.056682+0000', 'last_deep_scrub': "20'2", 'last_deep_scrub_stamp': '2023-02-15T15:29:32.056682+0000', 'last_clean_scrub_stamp': '2023-02-15T15:29:32.056682+0000', 'objects_scrubbed': 0, 'log_size': 33, 'log_dups_size': 3000, 'ondisk_log_size': 33, 'stats_invalid': False, 'dirty_stats_invalid': False, 'omap_stats_invalid': False, 'hitset_stats_invalid': False, 'hitset_bytes_stats_invalid': False, 'pin_stats_invalid': False, 'manifest_stats_invalid': False, 'snaptrimq_len': 0, 'last_scrub_duration': 1, 'scrub_schedule': 'periodic scrub scheduled @ 2023-02-17T03:05:43.521380+0000', 'scrub_duration': 0.004125126, 'objects_trimmed': 0, 'snaptrim_duration': 0, 'stat_sum': {'num_bytes': 185311256, 'num_objects': 22622, 'num_object_clones': 0, 'num_object_copies': 67866, 'num_objects_missing_on_primary': 1, 'num_objects_missing': 0, 'num_objects_degraded': 2, 'num_objects_misplaced': 7379, 'num_objects_unfound': 1, 'num_objects_dirty': 22622, 'num_whiteouts': 0, 'num_read': 0, 'num_read_kb': 0, 'num_write': 22622, 'num_write_kb': 180969, 'num_
scrub_errors': 0, 'num_shallow_scrub_errors': 0, 'num_deep_scrub_errors': 0, 'num_objects_recovered': 15241, 'num_bytes_recovered': 124851394, 'num_keys_recovered': 0, 'num_objects_omap': 0, 'num_objects_hit_set_archive': 0, 'num_bytes_hit_set_archive': 0, 'num_flush': 0, 'num_flush_kb': 0, 'num_evict': 0, 'num_evict_kb': 0, 'num_promote': 0, 'num_flush_mode_high': 0, 'num_flush_mode_low': 0, 'num_evict_mode_some': 0, 'num_evict_mode_full': 0, 'num_objects_pinned': 0, 'num_legacy_snapsets': 0, 'num_large_omap_objects': 0, 'num_objects_manifest': 0, 'num_omap_bytes': 0, 'num_omap_keys': 0, 'num_objects_repaired': 0}, 'up': [3, 0, 2], 'acting': [3, 1, 2], 'avail_no_missing': ['2(2)'], 'object_location_counts': [{'shards': '1(1),2(2),3(0)', 'objects': 22621}, {'shards': '2(2)', 'objects': 1}], 'blocked_by': [], 'up_primary': 3, 'acting_primary': 3, 'purged_snaps': []}
2023-02-15T15:51:52.792 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_teuthology_a5875b2da3506f26286d023ce2de3e75c0eb806d/teuthology/run_tasks.py", line 103, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/git.ceph.com_teuthology_a5875b2da3506f26286d023ce2de3e75c0eb806d/teuthology/run_tasks.py", line 82, in run_one_task
    return task(**kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_af6f31a11de6c2d1098be151e08b0efe0779c691/qa/tasks/ec_inconsistent_hinfo.py", line 152, in task
    manager.wait_for_clean()
  File "/home/teuthworker/src/github.com_ceph_ceph-c_af6f31a11de6c2d1098be151e08b0efe0779c691/qa/tasks/ceph_manager.py", line 2768, in wait_for_clean
    assert time.time() - start < timeout, \
AssertionError: wait_for_clean: failed before timeout expired
2023-02-15T15:51:52.861 ERROR:teuthology.run_tasks: Sentry event: https://sentry.ceph.com/organizations/ceph/?query=effc0f644ac34714a6d7483c4e3a9e4f
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_teuthology_a5875b2da3506f26286d023ce2de3e75c0eb806d/teuthology/run_tasks.py", line 103, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/git.ceph.com_teuthology_a5875b2da3506f26286d023ce2de3e75c0eb806d/teuthology/run_tasks.py", line 82, in run_one_task
    return task(**kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_af6f31a11de6c2d1098be151e08b0efe0779c691/qa/tasks/ec_inconsistent_hinfo.py", line 152, in task
    manager.wait_for_clean()
  File "/home/teuthworker/src/github.com_ceph_ceph-c_af6f31a11de6c2d1098be151e08b0efe0779c691/qa/tasks/ceph_manager.py", line 2768, in wait_for_clean
    assert time.time() - start < timeout, \
AssertionError: wait_for_clean: failed before timeout expired

/a/yuriw-2023-02-13_22:35:30-rados-wip-yuri11-testing-2023-02-06-1424-distro-default-smithi/7173549

2023-02-15T15:57:44.793 INFO:tasks.workunit.client.0.smithi049.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1274: _objectstore_tool_nowait:  _objectstore_tool_nodown td/osd-scrub-repair 0 SOMETHING list-attrs
2023-02-15T15:57:44.793 INFO:tasks.workunit.client.0.smithi049.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1255: _objectstore_tool_nodown:  local dir=td/osd-scrub-repair
2023-02-15T15:57:44.793 INFO:tasks.workunit.client.0.smithi049.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1256: _objectstore_tool_nodown:  shift
2023-02-15T15:57:44.793 INFO:tasks.workunit.client.0.smithi049.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1257: _objectstore_tool_nodown:  local id=0
2023-02-15T15:57:44.793 INFO:tasks.workunit.client.0.smithi049.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1258: _objectstore_tool_nodown:  shift
2023-02-15T15:57:44.793 INFO:tasks.workunit.client.0.smithi049.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1259: _objectstore_tool_nodown:  local osd_data=td/osd-scrub-repair/0
2023-02-15T15:57:44.793 INFO:tasks.workunit.client.0.smithi049.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1262: _objectstore_tool_nodown:  ceph-objectstore-tool --data-path td/osd-scrub-repair/0 SOMETHING list-attrs
2023-02-15T15:57:46.044 INFO:tasks.workunit.client.0.smithi049.stderr:Error getting attr on : 1.0_head,#1:00000000::::head#, (61) No data available
2023-02-15T15:57:46.044 INFO:tasks.workunit.client.0.smithi049.stderr:No object id 'SOMETHING' found or invalid JSON specified
2023-02-15T15:57:46.319 INFO:tasks.workunit.client.0.smithi049.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1263: _objectstore_tool_nodown:  return 1
2023-02-15T15:57:46.319 INFO:tasks.workunit.client.0.smithi049.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1274: _objectstore_tool_nowait:  return 1
2023-02-15T15:57:46.319 INFO:tasks.workunit.client.0.smithi049.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1300: objectstore_tool:  return 1
2023-02-15T15:57:46.319 INFO:tasks.workunit.client.0.smithi049.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh:480: TEST_auto_repair_bluestore_basic:  return 1

/a/yuriw-2023-02-13_22:35:30-rados-wip-yuri11-testing-2023-02-06-1424-distro-default-smithi/7173567

2023-02-15T15:33:57.707 DEBUG:teuthology.orchestra.run.smithi080:> sudo ls /var/lib/ceph/osd/ceph-6/fuse/1.7_head/all
2023-02-15T15:33:57.743 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_teuthology_a5875b2da3506f26286d023ce2de3e75c0eb806d/teuthology/run_tasks.py", line 103, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/git.ceph.com_teuthology_a5875b2da3506f26286d023ce2de3e75c0eb806d/teuthology/run_tasks.py", line 82, in run_one_task
    return task(**kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_af6f31a11de6c2d1098be151e08b0efe0779c691/qa/tasks/scrub_test.py", line 391, in task
    osd_remote, obj_path, obj_name = find_victim_object(ctx, pg, osd)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_af6f31a11de6c2d1098be151e08b0efe0779c691/qa/tasks/scrub_test.py", line 55, in find_victim_object
    objname = osdfilename.split(':')[4]
IndexError: list index out of range
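The `IndexError` above comes from `find_victim_object` splitting the on-disk object filename on `':'` and taking field index 4; any directory entry with fewer than five colon-separated fields makes that indexing fail. A minimal reproduction of just that expression (the filenames below are made up for illustration, not taken from the actual run):

```python
def object_name_from_filename(osdfilename: str) -> str:
    # Same expression as qa/tasks/scrub_test.py line 55:
    # split on ':' and take the fifth field.
    return osdfilename.split(':')[4]

# Five or more fields: index 4 exists (fields here are hypothetical).
assert object_name_from_filename("1.0:head:0:0:myobj") == "myobj"

# An unexpected short entry has no index 4 after the split and raises
# the IndexError seen in the traceback above.
try:
    object_name_from_filename("1.7_head")
except IndexError as e:
    print(e)  # list index out of range
```

This suggests the `fuse/1.7_head/all` listing returned an entry the parser did not expect, rather than a malformed object name per se.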

/a/yuriw-2023-02-13_22:35:30-rados-wip-yuri11-testing-2023-02-06-1424-distro-default-smithi/7173572

2023-02-15T18:07:25.878 INFO:tasks.workunit.client.0.smithi107.stdout:2023-02-15T18:07:17.736778+0000 osd.1 (osd.1) 6 : cluster [ERR] 1.0 deep-scrub 1 errors
2023-02-15T18:07:25.878 INFO:tasks.workunit.client.0.smithi107.stdout:2023-02-15T18:07:18.829538+0000 mgr.x (mgr.4100) 108 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 109 KiB data, 48 MiB used, 300 GiB / 300 GiB avail
2023-02-15T18:07:25.879 INFO:tasks.workunit.client.0.smithi107.stdout:2023-02-15T18:07:20.830056+0000 mgr.x (mgr.4100) 109 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 109 KiB data, 48 MiB used, 300 GiB / 300 GiB avail
2023-02-15T18:07:25.879 INFO:tasks.workunit.client.0.smithi107.stdout:2023-02-15T18:07:22.882002+0000 mon.a (mon.0) 450 : cluster [ERR] Health check failed: 1 scrub errors (OSD_SCRUB_ERRORS)
2023-02-15T18:07:25.879 INFO:tasks.workunit.client.0.smithi107.stdout:2023-02-15T18:07:22.882076+0000 mon.a (mon.0) 451 : cluster [ERR] Health check failed: Possible data damage: 1 pg inconsistent (PG_DAMAGED)
2023-02-15T18:07:25.879 INFO:tasks.workunit.client.0.smithi107.stdout:2023-02-15T18:07:22.830625+0000 mgr.x (mgr.4100) 110 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean+inconsistent; 109 KiB data, 48 MiB used, 300 GiB / 300 GiB avail
2023-02-15T18:07:25.879 INFO:tasks.workunit.client.0.smithi107.stdout:2023-02-15T18:07:24.831185+0000 mgr.x (mgr.4100) 111 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean+inconsistent; 109 KiB data, 48 MiB used, 300 GiB / 300 GiB avail
2023-02-15T18:07:25.892 INFO:tasks.workunit.client.0.smithi107.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/pg-split-merge.sh:84: TEST_a_merge_empty:  ceph pg ls
2023-02-15T18:07:26.291 INFO:tasks.workunit.client.0.smithi107.stdout:PG   OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES   OMAP_BYTES*  OMAP_KEYS*  LOG  LOG_DUPS  STATE                      SINCE  VERSION  REPORTED  UP         ACTING     SCRUB_STAMP                      DEEP_SCRUB_STAMP                 LAST_SCRUB_DURATION  SCRUB_SCHEDULING
2023-02-15T18:07:26.291 INFO:tasks.workunit.client.0.smithi107.stdout:1.0       11         0          0        0  111309            0           0    0        11  active+clean+inconsistent     8s     23'7    43:108  [1,0,2]p1  [1,0,2]p1  2023-02-15T18:07:17.736806+0000  2023-02-15T18:07:17.736806+0000                    1  periodic scrub scheduled @ 2023-02-16T23:29:18.565994+0000
2023-02-15T18:07:26.291 INFO:tasks.workunit.client.0.smithi107.stdout:
2023-02-15T18:07:26.292 INFO:tasks.workunit.client.0.smithi107.stdout:* NOTE: Omap statistics are gathered during deep scrub and may be inaccurate soon afterwards depending on utilization. See http://docs.ceph.com/en/latest/dev/placement-group/#omap-statistics for further details.
2023-02-15T18:07:26.310 INFO:tasks.workunit.client.0.smithi107.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/pg-split-merge.sh:85: TEST_a_merge_empty:  grep ' active.clean '
2023-02-15T18:07:26.310 INFO:tasks.workunit.client.0.smithi107.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/pg-split-merge.sh:85: TEST_a_merge_empty:  ceph pg ls
2023-02-15T18:07:26.711 INFO:tasks.workunit.client.0.smithi107.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/pg-split-merge.sh:85: TEST_a_merge_empty:  return 1

@Matan-B
Contributor Author

Matan-B commented Feb 19, 2023

Hey @Matan-B, can you see if these errors may be related to your PR? There were two other "OSD" PRs in this batch, so checking with them as well.

https://pulpito.ceph.com/?branch=wip-yuri11-testing-2023-02-06-1424

Replied in Trello card.

@ljflores
Member

ljflores commented Mar 7, 2023

Asked Yuri to include this in his next batch.

@ljflores
Member

ljflores commented Mar 9, 2023

https://pulpito.ceph.com/?branch=wip-yuri4-testing-2023-03-08-1234

Failures, unrelated:
1. https://tracker.ceph.com/issues/58946
2. https://tracker.ceph.com/issues/58560
3. https://tracker.ceph.com/issues/58585
4. https://tracker.ceph.com/issues/55347
5. https://tracker.ceph.com/issues/49287

Details:
1. cephadm: KeyError: 'osdspec_affinity' - Ceph - Orchestrator
2. test_envlibrados_for_rocksdb.sh failed to subscribe to repo - Infrastructure
3. rook: failed to pull kubelet image - Ceph - Orchestrator
4. SELinux Denials during cephadm/workunits/test_cephadm - Ceph - Orchestrator
5. podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope not found - Ceph - Orchestrator

@ljflores ljflores merged commit 8a9ee6f into ceph:main Mar 9, 2023