Is this a bug report or feature request?
Deviation from expected behavior:
In a small cluster such as one with 3 OSDs, the `mon_max_pg_per_osd` limit is quickly reached now that the default PG count per pool changed from 8 to 32 in v14.2.8. All you have to do is create an object store and one block pool before you hit the limit when `replicated.size` is 3 for all of the pools (the recommended default).
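A rough back-of-the-envelope calculation shows why the limit is hit so quickly. The pool count per object store below is an assumption for illustration (an object store creates a data pool plus several metadata pools); the other numbers come from the defaults described above:

```python
# Rough estimate of PGs per OSD on a small cluster.
PGS_PER_POOL = 32     # new per-pool default since v14.2.8 (was 8)
REPLICAS = 3          # replicated.size: 3, the recommended default
OSDS = 3              # a minimal cluster
MAX_PG_PER_OSD = 250  # Ceph's default mon_max_pg_per_osd

def pgs_per_osd(num_pools):
    # Each PG is stored REPLICAS times, spread across OSDS OSDs.
    return num_pools * PGS_PER_POOL * REPLICAS / OSDS

# Assumption: one block pool plus roughly 8 pools for an object store.
pools = 1 + 8
print(pgs_per_osd(pools))                    # 288.0
print(pgs_per_osd(pools) > MAX_PG_PER_OSD)   # True -- limit exceeded
```

With these assumptions, each pool costs a full 32 PGs per OSD, so even a handful of pools exhausts the default budget of 250.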
As a workaround, the max PGs per OSD (the default is 250) can be increased from the toolbox with the following command:

```console
ceph config set mon mon_max_pg_per_osd 500
```
However, Rook users would like to avoid PG management to keep things simple, especially in common scenarios.
@jdurgin What is your recommendation on how to deal with the new default? Increasing the max PGs per OSD seems like a temporary fix rather than a good recommendation.
Expected behavior:
Rook users should be able to create several pools, object stores, and filesystems on a basic cluster (3 OSDs) before hitting limits.
How to reproduce it (minimal and precise):
- Create a cluster on v14.2.8 or newer
- `kubectl create -f pool.yaml`
- `kubectl create -f object.yaml`

The object store will fail to be configured and the rgw daemons will not start due to errors during pool creation.