Releases: rook/rook
v1.19.5
Improvements
Rook v1.19.5 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- security: Grant scc to rook-ceph-nvmeof service account (#17432, @OdedViner)
- core: Remove newlines from liveness probe scripts (#17420, @sp98)
- csi: Add helm ownership annotation to csi resources (#17289, @subhamkrai)
- osd: Fix CRUSH device class not applied during OSD re-discovery (#17228, @ormandj)
- mds: Fix incorrect behaviour for CephFS when no active standby (#17373, @degorenko)
- doc: Fix out of date references to default PgHealthyRegex (#17376, @elias-dbx)
- build(deps): Bump github.com/go-jose/go-jose/v4 from 4.1.3 to 4.1.4 (#17300, @dependabot[bot])
- mon: Prevent mon drains more reliably when mons are down (#17359, @travisn)
- helm: Set ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS from chart values (#17352, @taraasrita10)
- csi: Swapped provisionerPriorityClassName with pluginPriorityClassName (#17361, @sonnysasaka)
- csi: Add 'CSIMetadataRadosNamespace' parameter to CephFilesystemSubVolumeGroup (#17351, @ein-stein-chen)
v1.19.4
Improvements
Rook v1.19.4 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- object: Fix CephObjectStoreUser support for setting Capabilities (#17149, @hjk068)
- mgr: Add missing RBAC role for ceph-mgr in secondary clusters (#17324, @gonzolino)
- deploy/examples: Add standalone cleanup-job.yaml (#17262, @mateenali66)
- osd: Add logging when detecting osd versions (#17320, @travisn)
- build: Update base image for Rook operator to v20.2.1 (#16836, @subhamkrai)
- cosi: Update default COSI sidecar image version (#17204, @takirala)
- ceph: Add labels support to CephObjectStore RGW service (#17238, @majiayu000)
- osd: Zap disks for forceful OSD installation (#17225, @sp98)
- helm: Update csi operator to v0.6.0 (#17244, @travisn)
v1.19.3
Improvements
Rook v1.19.3 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- csi: Use ceph-csi-operator to deploy Ceph-CSI/NVMe-oF (#17154, @nixpanic)
- csi: Update ceph-csi image to v3.16.2 (#17184, @black-dragon74)
- csi: Update CSI sidecars to latest versions available (#17119, @iPraveenParihar)
- pool: Clean up erasure code profile on pool deletion (#17208, @OdedViner)
- pool: Set EC pool status to ready after reconcile (#17200, @OdedViner)
- pool: Skip mirroring if the data pool is erasure-coded (#17143, @parth-gr)
- exporter: Delete orphaned ceph-exporter deployments on reconcile (#17165, @adilGhaffarDev)
- exporter: Reconcile as best effort during deletion and ensure all clusters reconciled (#17164, @travisn)
- exporter: Add configurable port for ceph exporter (#17116, @OdedViner)
- rgw: Create correct IPv6 formatted secret for object store users (#17161, @parth-gr)
- helm: Allow annotations and labels for CephCluster (#17046, @sathieu)
- osd: Check devlinks while cleaning osd disks (#17123, @sp98)
- osd: Update lockbox key rotation for encrypted OSDs (#17112, @BlaineEXE)
- osd: Set device-type label on update (#17113, @satoru-takeuchi)
- rgw: Support new RGW pools in shared pools zone json config (#17102, @arttor)
- rgw: ObjectStore controller to wait until zone and sharedPools are reconciled (#17101, @arttor)
v1.18.10
Improvements
Rook v1.18.10 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- exporter: Delete orphaned ceph-exporter deployments on reconcile (#17165, @adilGhaffarDev)
- exporter: Add log collector for ceph exporter pod (#16584, @subhamkrai)
- rbac: Remove nodes/proxy rbac grants (#16979, @ibotty)
- osd: Update lockbox key rotation for encrypted OSDs (#17112, @BlaineEXE)
- osd: In cephx key init, don't overwrite key on failure (#17052, @BlaineEXE)
- osd: Find correct osd container in case it is not index 0 (#16969, @kyrbrbik)
- osd: Fix updateExistingOSDs function for cancelled context (#17022, @sp98)
- nfs: Add CephNFS.spec.server.{image,imagePullPolicy} fields (#16982, @jhoblitt)
v1.19.2
Improvements
Rook v1.19.2 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- csi: Update imagePullPolicy in operatorconfig and driver CR (#17084, @iPraveenParihar)
- osd: Fix OSDs on multipath device with metadata device (#17083, @satoru-takeuchi)
- build: Publish images with buildx instead of manifest-tool (#17079, @subhamkrai)
- rgw: Update status info when http is disabled (#17050, @sp98)
- object: Add ObjectStoreUserSpec.OpMask field (#17037, @jhoblitt)
- exporter: Add log collector for ceph exporter pod (#16584, @subhamkrai)
- csi: Update ceph-csi image to 3.16.1 (#17060, @iPraveenParihar)
- build(deps): bump sigs.k8s.io/controller-runtime from 0.22.4 to 0.23.0 in the k8s-dependencies group (#16963, @dependabot[bot])
- osd: In cephx key init, don't overwrite key on failure (#17052, @BlaineEXE)
- nvmeof: Add default gateway topology spread constraints (#17074, @OdedViner)
- nvmeof: Update expansion fields and sidecar images (#17019, @OdedViner)
v1.19.1
Improvements
Rook v1.19.1 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- csi: Update to ceph csi operator to v0.5 (#17029, @subhamkrai)
- security: Remove unnecessary nodes/proxy RBAC enablement (#16979, @ibotty)
- helm: Set default ceph image pull policy (#16954, @travisn)
- nfs: Add CephNFS.spec.server.{image,imagePullPolicy} fields (#16982, @jhoblitt)
- osd: Assign correct osd container in case it is not index 0 (#16969, @kyrbrbik)
- csi: Remove obsolete automated node fencing code (#16922, @subhamkrai)
- osd: Enable proper cancellation during OSD reconcile (#17022, @sp98)
- csi: Allow running the csi controller plugin on host network (#16972, @Madhu-1)
- rgw: Update ca bundle mount perms to read-all (#16968, @BlaineEXE)
- mon: Change do-not-reconcile to be more granular for individual mons (#16939, @travisn)
- build(deps): Bump the k8s-dependencies group with 6 updates (#16846, @dependabot[bot])
- doc: add csi-operator example in configuration doc (#17001, @subhamkrai)
v1.19.0
Upgrade Guide
To upgrade from previous versions of Rook, see the Rook upgrade guide.
Breaking Changes
- The supported Kubernetes versions are v1.30 - v1.35
- The minimum supported Ceph version is v19.2.0. Rook v1.18 clusters running Ceph v18 must upgrade to Ceph v19.2.0 or higher before upgrading Rook.
- The behavior of the `activeStandby` property in the `CephFilesystem` CRD has changed. When set to `false`, the standby MDS daemon deployment will be scaled down and removed, rather than only disabling the standby cache while the daemon remains running.
- Helm: The `rook-ceph-cluster` chart has changed where the Ceph image is defined, to allow separate settings for the repository and tag. For more details, see the Rook upgrade guide.
- In external mode, when users provide a Ceph admin keyring to Rook, Rook will no longer create CSI Ceph clients automatically. This provides more consistency for configuring external mode clusters via the same external Python script.
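As an illustrative sketch of the activeStandby behavior change, a CephFilesystem with the standby disabled might look like the following. The filesystem name and pool layout are example values only; the field names match the CephFilesystem CRD.

```shell
# Sketch: in v1.19, activeStandby: false removes the standby MDS deployment
# entirely, rather than leaving the daemon running with its cache disabled.
# "myfs" and the pool sizes below are example values.
kubectl apply -f - <<'EOF'
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - name: replicated
      replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: false  # v1.19: standby MDS deployment is scaled down and removed
EOF
```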
Features
- Experimental: NVMe over Fabrics (NVMe-oF) allows RBD volumes to be exposed and accessed via the NVMe/TCP protocol. This enables both Kubernetes pods within the cluster and external clients outside the cluster to connect to Ceph block storage using standard NVMe-oF initiators, providing high-performance block storage access over the network. See the NVMe-oF Configuration Guide to get started.
- CephCSI v3.16 Integration:
- NVMe-oF CSI driver for provisioning and mounting volumes over the NVMe over Fabrics protocol
- Improved fencing for RBD and CephFS volumes during node failure
- Block volume usage statistics
- Configurable block encryption cipher
- Experimental: Allow concurrent reconciles of the CephCluster CR when there are multiple clusters being managed by the same Rook operator. Concurrency is enabled by increasing the operator setting `ROOK_RECONCILE_CONCURRENT_CLUSTERS` to a value greater than `1`.
- Improved logging with namespaced names for the controllers, for more consistent troubleshooting of the Rook operator log.
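As a sketch, the concurrency setting above can be raised on the operator's config ConfigMap. This assumes the conventional rook-ceph-operator-config ConfigMap in the rook-ceph namespace; the value 3 is only an example.

```shell
# Sketch: set ROOK_RECONCILE_CONCURRENT_CLUSTERS above 1 so the operator
# may reconcile several CephCluster CRs concurrently (experimental).
# Assumes the standard rook-ceph-operator-config ConfigMap; "3" is an example value.
kubectl -n rook-ceph patch configmap rook-ceph-operator-config \
  --type merge \
  -p '{"data":{"ROOK_RECONCILE_CONCURRENT_CLUSTERS":"3"}}'
```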
v1.18.9
Improvements
Rook v1.18.9 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- csi: Disable read affinity for ceph v20.2.0 to avoid corruption (#16895, @travisn)
- core: Allow skipping cephcluster reconcile via do-not-reconcile label (#16874, @OdedViner)
- helm: Merge rook-config-override ConfigMap into toolbox ceph.conf (#16862, @mheler)
- helm: Add cephclusters/finalizers permission for mgr sidecar (#16854, @grandeit)
- csi: Add fix to support multiple fs mount option (#16837, @subhamkrai)
- operator: Watch cephConfigFromSecret changes (#16786, @cyanidium)
- rgw: Support all S3 notification events in CRD validation (#16804, @arttor)
- docs: Add pool parameter for erasure code optimizations (#16789, @travisn)
v1.18.8
Improvements
Rook v1.18.8 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- core: Add support for ceph tentacle (#16501, @subhamkrai)
- helm: Include exporter options in CephCluster (#16745, @michaeltchapman)
- toolbox: Merge rook-config-override ConfigMap into ceph.conf (#16731, @mheler)
- csi: ControllerPlugin/NodePlugin resource settings were reversed (#16735, @swills)
- osd: Allow snaptrim and snaptrim_wait PGs by the PDBs during node drains (#16713, @sp98)
- helm: Fix default pathType for HTTPRoute in the rook-ceph-cluster chart (#16724, @fancl20)
- pool: Retry if pool status is empty in the rados namespace controller (#16705, @parth-gr)
- namespace: Add retryOnConflict when updating status (#16661, @subhamkrai)
v1.18.7
Improvements
Rook v1.18.7 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- pool: Retry pool status updates in the radosnamespace controller (#16700, @parth-gr)
- osd: Add device class label to the osd prepare pods (#16675, @parth-gr)
- external: Fix quote parsing and message in import-external-cluster.sh (#16646, @GanghyeonSeo)
- object: Fix user quotas being overwritten when obc bucketOwner is set (#16672, @jhoblitt)
- docs: Example of application migration between clusters (#16659, @travisn)
- mgr: Add hostNetwork field to Ceph Mgr spec (#16617, @Sunnatillo)
- osd: Add CephCluster `OSDMaxUpdatesInParallel` to tune OSD updates (#16655, @jhoblitt)