Bug #70400

open

"ceph node ls" showing destroyed osds

Added by Nitzan Mordechai about 1 year ago. Updated 5 months ago.

Status:
Pending Backport
Priority:
High
Category:
-
Target version:
-
% Done:

0%

Source:
Backport:
reef, squid, quincy
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Tags (freeform):
backport_processed
Fixed In:
v20.0.0-507-gdf30796ead
Released In:
v20.2.0~842
Upkeep Timestamp:
2025-11-01T01:00:13+00:00

Description

BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2269003

After running "ceph orch osd rm ${OSD} --zap --replace", "ceph node ls" still lists the destroyed OSD:
[root@rhcs6node1 ~]# ceph node ls
{
    "mon": {
        "rhcs6node1": [
            "rhcs6node1"
        ],
        "rhcs6node2": [
            "rhcs6node2"
        ],
        "rhcs6node3": [
            "rhcs6node3"
        ]
    },
    "osd": {
        "rhcs6client": [
            3
        ],
        "rhcs6node1": [
            0
        ],
        "rhcs6node2": [
            1
        ],
        "rhcs6node3": [
            2
        ]
    },
    "mgr": {
        "rhcs6node1": [
            "rhcs6node1.vojptl"
        ],
        "rhcs6node2": [
            "rhcs6node2.rgadnw"
        ],
        "rhcs6node3": [
            "rhcs6node3.aquyuo"
        ]
    }
}

This differs from "ceph osd tree", which correctly reports the OSD's status as destroyed:
[root@rhcs6node1 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME             STATUS     REWEIGHT  PRI-AFF
-1         0.07794  root default
-9         0.01949      host rhcs6client
 3    hdd  0.01949          osd.3             up      1.00000  1.00000
-3         0.01949      host rhcs6node1
 0    hdd  0.01949          osd.0             up      1.00000  1.00000
-5         0.01949      host rhcs6node2
 1    hdd  0.01949          osd.1             up      1.00000  1.00000
-7         0.01949      host rhcs6node3
 2    hdd  0.01949          osd.2      destroyed            0  1.00000
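Until the fix is available on a given branch, a listing without destroyed OSDs can be derived from the JSON form of "ceph osd tree". The sketch below illustrates the idea; the sample data is a hypothetical, trimmed-down stand-in for real `ceph osd tree -f json` output, and `live_osds` is a helper name invented here, not a Ceph API.

```python
import json

# Hypothetical sample in the shape of `ceph osd tree -f json`, trimmed to
# the fields used below. On a live cluster the JSON would instead come from
# subprocess.run(["ceph", "osd", "tree", "-f", "json"], capture_output=True).
OSD_TREE_JSON = """
{
  "nodes": [
    {"id": -1, "name": "default",      "type": "root", "children": [-9, -7]},
    {"id": -9, "name": "rhcs6client",  "type": "host", "children": [3]},
    {"id": 3,  "name": "osd.3",        "type": "osd",  "status": "up"},
    {"id": -7, "name": "rhcs6node3",   "type": "host", "children": [2]},
    {"id": 2,  "name": "osd.2",        "type": "osd",  "status": "destroyed"}
  ]
}
"""

def live_osds(tree):
    # Keep only OSD nodes whose status is not "destroyed", mirroring what
    # "ceph node ls" is expected to report once the fix is in place.
    return [n["name"] for n in tree["nodes"]
            if n.get("type") == "osd" and n.get("status") != "destroyed"]

tree = json.loads(OSD_TREE_JSON)
print(live_osds(tree))  # osd.2 is destroyed, so only osd.3 is listed
```

In the sample above, osd.2 (the destroyed OSD on rhcs6node3) is filtered out while osd.3 is kept, matching the "ceph osd tree" status column.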


Related issues 2 (2 open, 0 closed)

Copied to RADOS - Backport #70495: reef: "ceph node ls" showing destroyed osds (In Progress, Nitzan Mordechai)
Copied to RADOS - Backport #70496: squid: "ceph node ls" showing destroyed osds (In Progress, Nitzan Mordechai)
Actions #1

Updated by Nitzan Mordechai about 1 year ago

  • Status changed from New to In Progress
  • Priority changed from Normal to High
  • Backport set to reef, squid
Actions #2

Updated by Nitzan Mordechai about 1 year ago

  • Status changed from In Progress to Fix Under Review
  • Pull request ID set to 62243
Actions #3

Updated by Nitzan Mordechai about 1 year ago

  • Status changed from Fix Under Review to Pending Backport
Actions #4

Updated by Upkeep Bot about 1 year ago

  • Copied to Backport #70495: reef: "ceph node ls" showing destroyed osds added
Actions #5

Updated by Upkeep Bot about 1 year ago

  • Copied to Backport #70496: squid: "ceph node ls" showing destroyed osds added
Actions #6

Updated by Upkeep Bot about 1 year ago

  • Tags (freeform) set to backport_processed
Actions #7

Updated by Nitzan Mordechai about 1 year ago

  • Backport changed from reef, squid to reef, squid, quincy
Actions #8

Updated by Frédéric NASS 11 months ago

For the record, this tracker and associated PR resolved the issue where destroyed OSDs were incorrectly appearing as stray daemons in Ceph status reports (https://tracker.ceph.com/issues/67018).

Actions #9

Updated by Upkeep Bot 9 months ago

  • Merge Commit set to df30796ead7f3d85bb55309f9399841911377a22
  • Fixed In set to v20.0.0-507-gdf30796ead7
  • Upkeep Timestamp set to 2025-07-08T18:07:05+00:00
Actions #10

Updated by Upkeep Bot 8 months ago

  • Fixed In changed from v20.0.0-507-gdf30796ead7 to v20.0.0-507-gdf30796ead7f
  • Upkeep Timestamp changed from 2025-07-08T18:07:05+00:00 to 2025-07-14T15:21:31+00:00
Actions #11

Updated by Upkeep Bot 8 months ago

  • Fixed In changed from v20.0.0-507-gdf30796ead7f to v20.0.0-507-gdf30796ead
  • Upkeep Timestamp changed from 2025-07-14T15:21:31+00:00 to 2025-07-14T20:46:03+00:00
Actions #12

Updated by Upkeep Bot 5 months ago

  • Released In set to v20.2.0~842
  • Upkeep Timestamp changed from 2025-07-14T20:46:03+00:00 to 2025-11-01T01:00:13+00:00