cephadm: add support for seastore #67274

Merged
shraddhaag merged 3 commits into ceph:main from shraddhaag:wip-shraddhaag-cephadm-seastore-support
Feb 20, 2026

Conversation


@shraddhaag shraddhaag commented Feb 9, 2026

Description

This PR adds support for deploying crimson OSDs with the seastore objectstore using cephadm (follow-up to #66811).

Usage

I was able to successfully deploy a cluster with crimson OSDs and the seastore objectstore using two methods:

  1. Using the `--objectstore seastore` flag. The full command used:

```
ceph orch apply osd --all-available-devices --osd-type crimson --objectstore seastore
```

  2. Using an Advanced OSD Service Specification. The spec I used:

```
service_type: osd
service_id: osd_crimson_seastore
placement:
  host_pattern: '*'
spec:
  objectstore: seastore
  osd_type: crimson
  data_devices:
    all: true
```
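The two methods above should yield an equivalent service specification. As a minimal sketch of the flag-to-spec mapping (an assumption for illustration: `spec_from_flags` is a hypothetical helper, not cephadm's actual code; field names follow the YAML spec shown):

```python
# Hypothetical helper illustrating how the CLI flags map onto the same
# fields as the YAML spec above. Not cephadm's actual implementation.
def spec_from_flags(all_available_devices: bool, osd_type: str, objectstore: str) -> dict:
    """Build a spec dict mirroring the `ceph orch apply osd` flags."""
    return {
        "service_type": "osd",
        "placement": {"host_pattern": "*"},
        "spec": {
            "objectstore": objectstore,
            "osd_type": osd_type,
            "data_devices": {"all": all_available_devices},
        },
    }

cli_spec = spec_from_flags(True, "crimson", "seastore")
assert cli_spec["spec"]["objectstore"] == "seastore"
assert cli_spec["spec"]["osd_type"] == "crimson"
```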
The logs for the successful OSD boot process:
INFO  2026-02-10 12:06:23,298 [shard 0:main] osd - OSD::main: passed objectstore is seastore
WARN  2026-02-10 12:06:23,302 [shard 0:main] osd - _get_class not permitted to load lua
WARN  2026-02-10 12:06:23,305 [shard 0:main] osd - _get_class not permitted to load sdk
WARN  2026-02-10 12:06:23,305 [shard 0:main] osd - _load_class could not open class /usr/lib64/rados-classes/libcls_sem_set.so (dlopen failed): /usr/lib64/rados-classes/libcls_sem_set.so: undefined symbol: _ZN4ceph15from_error_codeEN5boost6system10error_codeE
WARN  2026-02-10 12:06:23,305 [shard 0:main] osd - OSD::OSD: warning: got an error loading one or more classes: (5) Input/output error
INFO  2026-02-10 12:06:23,305 [shard 0:main] osd - OSD::OSD: nonce is 3928823083
INFO  2026-02-10 12:06:23,305 [shard 0:main] osd - populating config from monitor
DEBUG 2026-02-10 12:06:23,315 [shard 0:main] osd - operator(): got config from monitor, fsid adaeded6-0672-11f1-8a62-5254002e89c6
DEBUG 2026-02-10 12:06:23,316 [shard 0:main] osd - OSD::main: running mkfs
DEBUG 2026-02-10 12:06:23,316 [shard 0:main] osd - OSD::mkfs: starting store mkfs
DEBUG 2026-02-10 12:06:23,316 [shard 0:main] osd - OSD::mkfs: calling store mkfs
DEBUG 2026-02-10 12:06:23,423 [shard 0:main] osd - OSD::mkfs: mounting store mkfs
DEBUG 2026-02-10 12:06:23,459 [shard 0:main] osd - OSD::open_or_create_meta_coll: 
DEBUG 2026-02-10 12:06:23,460 [shard 0:main] osd - OSD::open_or_create_meta_coll: creating new metadata collection
DEBUG 2026-02-10 12:06:23,460 [shard 0:main] osd - OSD::_write_superblock: try loading existing superblock
DEBUG 2026-02-10 12:06:23,460 [shard 0:main] osd - OSDMeta::load_superblock: 
INFO  2026-02-10 12:06:23,460 [shard 0:main] osd - OSD::_write_superblock: writing superblock cluster_fsid adaeded6-0672-11f1-8a62-5254002e89c6 osd_fsid be51e8dd-b1a9-4390-8c02-668b2554cc15
DEBUG 2026-02-10 12:06:23,460 [shard 0:main] osd - OSD::_write_superblock: do_transaction: create meta collection and store superblock
INFO  2026-02-10 12:06:23,469 [shard 0:main] osd - OSD::mkfs: created object store /var/lib/ceph/osd/ceph-1/ for osd.1 fsid adaeded6-0672-11f1-8a62-5254002e89c6
DEBUG 2026-02-10 12:06:23,482 [shard 0:main] osd - OSD::main: exiting, mkkey 0, mkfs 1
INFO  2026-02-10 12:06:32,276 [shard 0:main] osd - OSD::main: passed objectstore is seastore
WARN  2026-02-10 12:06:32,279 [shard 0:main] osd - _get_class not permitted to load lua
WARN  2026-02-10 12:06:32,282 [shard 0:main] osd - _get_class not permitted to load sdk
WARN  2026-02-10 12:06:32,282 [shard 0:main] osd - _load_class could not open class /usr/lib64/rados-classes/libcls_sem_set.so (dlopen failed): /usr/lib64/rados-classes/libcls_sem_set.so: undefined symbol: _ZN4ceph15from_error_codeEN5boost6system10error_codeE
WARN  2026-02-10 12:06:32,282 [shard 0:main] osd - OSD::OSD: warning: got an error loading one or more classes: (5) Input/output error
INFO  2026-02-10 12:06:32,282 [shard 0:main] osd - OSD::OSD: nonce is 561608711
INFO  2026-02-10 12:06:32,282 [shard 0:main] osd - populating config from monitor
DEBUG 2026-02-10 12:06:32,294 [shard 0:main] osd - operator(): got config from monitor, fsid adaeded6-0672-11f1-8a62-5254002e89c6
DEBUG 2026-02-10 12:06:32,294 [shard 0:main] osd - OSD::main: starting OSD services
INFO  2026-02-10 12:06:32,294 [shard 0:main] osd - OSD::start: seastar::smp::count 1
DEBUG 2026-02-10 12:06:32,294 [shard 0:main] osd - OSD::start: starting store
DEBUG 2026-02-10 12:06:32,295 [shard 0:main] osd - OSD::start: mounting store
DEBUG 2026-02-10 12:06:32,356 [shard 0:main] osd - OSD::start: open metadata collection
DEBUG 2026-02-10 12:06:32,356 [shard 0:main] osd - OSD::open_meta_coll: opening metadata collection
DEBUG 2026-02-10 12:06:32,356 [shard 0:main] osd - OSD::open_meta_coll: registering metadata collection
DEBUG 2026-02-10 12:06:32,356 [shard 0:main] osd - OSD::start: loading superblock
DEBUG 2026-02-10 12:06:32,356 [shard 0:main] osd - OSDMeta::load_superblock: 
DEBUG 2026-02-10 12:06:32,356 [shard 0:main] osd - OSDMeta::load_superblock: successfully read superblock
DEBUG 2026-02-10 12:06:32,356 [shard 0:main] osd - OSDMeta::load_superblock: decoding superblock bufferlist
DEBUG 2026-02-10 12:06:32,357 [shard 0:main] osd - OSDSingletonState::get_local_map: osdmap.0 found in cache
DEBUG 2026-02-10 12:06:32,357 [shard 0:main] osd - OSD::start: loading PGs
WARN  2026-02-10 12:06:32,357 [shard 0:main] osd - ignoring unrecognized collection: meta
INFO  2026-02-10 12:06:32,357 [shard 0:main] osd - osd.cc:pick_addresses: picked address v2:192.168.100.100:0/0
INFO  2026-02-10 12:06:32,357 [shard 0:main] osd - osd.cc:pick_addresses: picked address v2:192.168.100.100:0/0
DEBUG 2026-02-10 12:06:32,358 [shard 0:main] osd - OSD::start: starting mon and mgr clients
DEBUG 2026-02-10 12:06:32,362 [shard 0:main] osd - OSD::start: adding to crush
INFO  2026-02-10 12:06:32,362 [shard 0:main] osd - OSD::_add_me_to_crush: crush location is "host=ceph-node-0", "root=default"
INFO  2026-02-10 12:06:33,272 [shard 0:main] osd - OSD::_add_me_to_crush: added to crush: create-or-move updating item name 'osd.1' weight 0.0195 at location {host=ceph-node-0,root=default} to crush map
INFO  2026-02-10 12:06:33,273 [shard 0:main] osd - OSD::_add_device_class: device_class is SSD 
INFO  2026-02-10 12:06:34,300 [shard 0:main] osd - OSD::_add_device_class: device_class was set: set osd(s) 1 to class 'SSD'
INFO  2026-02-10 12:06:34,300 [shard 0:main] osd - OperationThrottler::start: Starting OperationThrottler background task
INFO  2026-02-10 12:06:34,301 [shard 0:main] osd - osd.cc:pick_addresses: picked address v2:192.168.100.100:0/0
INFO  2026-02-10 12:06:34,302 [shard 0:main] osd - osd.cc:pick_addresses: picked address v2:192.168.100.100:0/0
INFO  2026-02-10 12:06:34,302 [shard 0:main] osd - heartbeat: start front_addrs=v2:192.168.100.100:0/0, back_addrs=v2:192.168.100.100:0/0
DEBUG 2026-02-10 12:06:34,306 [shard 0:main] osd - OSD::start: starting boot
INFO  2026-02-10 12:06:34,321 [shard 0:main] osd - OSD::_handle_osd_map: osd_map(8..8 src has 1..8) v4
INFO  2026-02-10 12:06:34,321 [shard 0:main] osd - OSD::_handle_osd_map:  epochs [8..8], i have 0, src has [1..8]
DEBUG 2026-02-10 12:06:34,321 [shard 0:main] osd - OSD::_handle_osd_map: superblock cluster_osdmap_trim_lower_bound new epoch is: 1
INFO  2026-02-10 12:06:34,321 [shard 0:main] osd - OSD::_handle_osd_map: message skips epochs 1..7
INFO  2026-02-10 12:06:34,321 [shard 0:main] osd - OSDSingletonState::osdmap_subscribe: epoch 1
INFO  2026-02-10 12:06:34,322 [shard 0:main] osd - OSD::_preboot: osd.1
INFO  2026-02-10 12:06:34,322 [shard 0:main] osd - OSD::_preboot: waiting for initial osdmap
INFO  2026-02-10 12:06:34,322 [shard 0:main] osd - OSDSingletonState::osdmap_subscribe: epoch 1
INFO  2026-02-10 12:06:34,322 [shard 0:main] osd - OSD::main: crimson startup completed

Docs

Fixes: https://tracker.ceph.com/issues/74616

Checklist

  • Tracker (select at least one)
    • References tracker ticket
    • Very recent bug; references commit where it was introduced
    • New feature (ticket optional)
    • Doc update (no ticket needed)
    • Code cleanup (no ticket needed)
  • Component impact
    • Affects Dashboard, opened tracker ticket
    • Affects Orchestrator, opened tracker ticket
    • No impact that needs to be tracked
  • Documentation (select at least one)
    • Updates relevant documentation
    • No doc update is appropriate
  • Tests (select at least one)

@shraddhaag shraddhaag moved this to In Progress in Crimson Feb 9, 2026
@shraddhaag shraddhaag force-pushed the wip-shraddhaag-cephadm-seastore-support branch from 965a70e to 8956826 Compare February 10, 2026 11:57

@Matan-B Matan-B left a comment


lgtm!

@Matan-B Matan-B requested review from adk3798 and guits February 11, 2026 09:45

Matan-B commented Feb 11, 2026

PR description is great! Let's mention this in the docs as well (and mention the unsupported raw method).

@shraddhaag shraddhaag force-pushed the wip-shraddhaag-cephadm-seastore-support branch 2 times, most recently from 1d58cd9 to 68aa1cc Compare February 12, 2026 05:21
@shraddhaag shraddhaag force-pushed the wip-shraddhaag-cephadm-seastore-support branch 2 times, most recently from 35d2781 to 7f285e7 Compare February 12, 2026 09:26
@shraddhaag

jenkins test make check arm64

@shraddhaag shraddhaag force-pushed the wip-shraddhaag-cephadm-seastore-support branch from 7f285e7 to 5475740 Compare February 12, 2026 12:01
@shraddhaag shraddhaag changed the title [WIP] cephadm: add support for seastore cephadm: add support for seastore Feb 12, 2026
@shraddhaag shraddhaag marked this pull request as ready for review February 12, 2026 12:27
@shraddhaag shraddhaag moved this from In Progress to Awaits review in Crimson Feb 13, 2026

@Matan-B Matan-B left a comment


lgtm! Great work!

@Matan-B Matan-B moved this from Awaits review to Tested in Crimson Feb 16, 2026

Matan-B commented Feb 16, 2026

jenkins retest this please


Matan-B commented Feb 16, 2026

We should get this merged to allow early SeaStore deployments!

@shraddhaag

> We should get this merged to allow early SeaStore deployments!

Waiting for a review from @adk3798 and we should be good to go here!

This commit adds support for deploying the seastore objectstore with
cephadm. This can be done in two ways:

1. Using an OSD spec file, we can set the objectstore argument to
seastore, e.g.:
```
service_type: osd
service_id: osd_crimson_seastore
placement:
  host_pattern: '*'
spec:
  objectstore: seastore
  osd_type: crimson
  data_devices:
    all: true
```

2. Using the --objectstore flag with `ceph orch apply osd`. Sample command:
```
ceph orch apply osd --all-available-devices --osd-type crimson --objectstore seastore
```

Fixes: https://tracker.ceph.com/issues/74616
Signed-off-by: Shraddha Agrawal <shraddha.agrawal000@gmail.com>
This commit adds the following tests:
1. cephadm: JSON round trip of a spec with objectstore=seastore.
2. cephadm: validation checks for objectstore values.
3. cephadm to ceph-volume: command checks when objectstore=seastore is set.

Signed-off-by: Shraddha Agrawal <shraddha.agrawal000@gmail.com>
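The JSON round-trip idea from the test commit can be sketched as follows. This is a minimal stand-in: the real cephadm tests exercise DriveGroupSpec, while the `OsdSpec` dataclass here is a hypothetical simplification for illustration.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical stand-in for the spec class (the real tests use
# DriveGroupSpec). The point: objectstore=seastore survives a JSON
# serialize/deserialize round trip unchanged.
@dataclass
class OsdSpec:
    service_type: str
    service_id: str
    objectstore: str
    osd_type: str

spec = OsdSpec("osd", "osd_crimson_seastore", "seastore", "crimson")
restored = OsdSpec(**json.loads(json.dumps(asdict(spec))))
assert restored == spec
assert restored.objectstore == "seastore"
```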
@shraddhaag shraddhaag force-pushed the wip-shraddhaag-cephadm-seastore-support branch from 5475740 to 941b42c Compare February 17, 2026 14:24

Matan-B commented Feb 18, 2026

jenkins test make check

@shraddhaag

jenkins test make check

This commit updates the crimson user facing docs to add
instructions on how to deploy a crimson OSD with seastore
objectstore.

Signed-off-by: Shraddha Agrawal <shraddha.agrawal000@gmail.com>
@shraddhaag shraddhaag force-pushed the wip-shraddhaag-cephadm-seastore-support branch from 941b42c to 59d0e2e Compare February 19, 2026 11:52

@adk3798 adk3798 left a comment


A couple nitpicks, generally LGTM

```
 self.data_directories = data_directories

-#: ``filestore`` or ``bluestore``
+#: ``filestore`` or ``bluestore`` or ``seastore``
```

I assume we can remove filestore from this comment at this point. It looks like the validation only allowed bluestore before this PR anyway

```
 "`all` is only allowed for data_devices")

-if self.objectstore not in ('bluestore'):
+if self.objectstore not in ['bluestore', 'seastore']:
```
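Incidentally, the old check was also subtly broken as Python: `('bluestore')` is not a one-element tuple, so `in` performed substring matching. A minimal illustration of the pitfall (sketch only, not the cephadm code):

```python
# ('bluestore') has no trailing comma, so the parentheses are just
# grouping: the expression is a plain str, and `in` does substring
# matching on it instead of membership testing.
old_allowed = ('bluestore')        # actually a str, not a tuple
new_allowed = ['bluestore', 'seastore']

assert isinstance(old_allowed, str)
assert 'store' in old_allowed       # substring match: passes unintentionally
assert 'store' not in new_allowed   # list membership: rejects bogus values
assert 'seastore' in new_allowed
```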

What happens currently if users set the osd type to crimson but don't set the objectstore field at all? I'd think we'd want to just have the objectstore field default to bluestore for "classic" OSDs and seastore for crimson ones.


We want bluestore as the default store for both classic and crimson OSDs at the moment.
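The defaulting behavior described above can be sketched as follows (an assumption for illustration: `effective_objectstore` is a hypothetical helper, not cephadm's code):

```python
# bluestore is the default for both classic and crimson OSDs;
# seastore must be requested explicitly via the objectstore field.
def effective_objectstore(spec: dict) -> str:
    return spec.get("objectstore") or "bluestore"

assert effective_objectstore({"osd_type": "crimson"}) == "bluestore"
assert effective_objectstore({"osd_type": "crimson", "objectstore": "seastore"}) == "seastore"
```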

@shraddhaag

jenkins test docs

@shraddhaag shraddhaag merged commit c059828 into ceph:main Feb 20, 2026
13 of 15 checks passed
@github-actions

This is an automated message by src/script/redmine-upkeep.py.

I have resolved the following tracker ticket due to the merge of this PR:

No backports are pending for the ticket. If this is incorrect, please update the tracker
ticket and reset to Pending Backport state.

Update Log: https://github.com/ceph/ceph/actions/runs/22213542155

@Matan-B Matan-B moved this from Tested to Merged (Main) in Crimson Feb 22, 2026

Projects

Status: Merged (Main)

3 participants