
cephfs: round to cephfs size to multiple of 4Mib #3241

Merged
mergify[bot] merged 1 commit into ceph:devel from Madhu-1:fix-3240
Jul 13, 2022

Conversation

@Madhu-1 (Collaborator) commented Jul 12, 2022

Due to a bug in the df stat, we need to round off the subvolume size to align with 4 MiB.

Note: the minimum supported size in cephcsi is 1 MiB, so we don't need to handle KiB.

fixes #3240

More details at ceph/ceph#46905

Signed-off-by: Madhu Rajanna madhupr007@gmail.com
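The rounding rule described above (a 4 MiB floor, then round up to the next multiple of 4 MiB) can be sketched as a small standalone Go program. The names `roundOffCephFSVolumeSize` and `mib` here are stand-ins, not the exact ceph-csi identifiers (the real code uses `helpers.MiB` and related helpers):

```go
package main

import (
	"fmt"
	"math"
)

const mib = int64(1024 * 1024)

// roundOffCephFSVolumeSize rounds a requested size in bytes up to a
// multiple of 4 MiB; anything at or below 4 MiB becomes 4 MiB.
func roundOffCephFSVolumeSize(bytes int64) int64 {
	if bytes < 4*mib {
		return 4 * mib
	}
	mibs := bytes / mib // convert to a MiB count (minimum supported unit is 1 MiB)
	mibs = int64(math.Ceil(float64(mibs)/4) * 4) // round up to a multiple of 4
	return mibs * mib // convert the MiB count back to bytes
}

func main() {
	// The three PVC sizes exercised in the logs below:
	for _, req := range []int64{3 * mib, 4 * mib, 1023 * mib} {
		fmt.Printf("%4d MiB -> %4d MiB\n", req/mib, roundOffCephFSVolumeSize(req)/mib)
	}
}
```

This matches the CAPACITY values in the logs: 3 Mi and 4 Mi requests both provision 4 Mi, and a 1023 Mi request provisions 1 Gi.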

Logs with fix

3 MiB PVC

[🎩︎]mrajanna@fedora cephfs $]kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc   Bound    pvc-0e91cc59-785a-4bf9-be18-a180fa3ec70f   4Mi        RWO            rook-cephfs    2s
[🎩︎]mrajanna@fedora cephfs $]cat pvc.yaml 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Mi
  storageClassName: rook-cephfs
[🎩︎]mrajanna@fedora cephfs $]kubectl create -f pod.yaml 
pod/csicephfs-demo-pod created
[🎩︎]mrajanna@fedora cephfs $]kubectl exec -it pod/csicephfs-demo-pod -- sh
# df -h
Filesystem                                                                                                        Size  Used Avail Use% Mounted on
overlay                                                                                                            28G  6.5G   20G  25% /
tmpfs                                                                                                              64M     0   64M   0% /dev
tmpfs                                                                                                             2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vda1                                                                                                          28G  6.5G   20G  25% /etc/hosts
shm                                                                                                                64M     0   64M   0% /dev/shm
10.97.50.161:6789:/volumes/csi/csi-vol-d67b85ab-01b9-11ed-9045-e63b4f507f02/1926b8ae-4ceb-440b-ab24-caa7756e965f  4.0M     0  4.0M   0% /var/lib/www/html
tmpfs                                                                                                             3.9G   12K  3.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                                                                                             2.0G     0  2.0G   0% /proc/acpi
tmpfs                                                                                                             2.0G     0  2.0G   0% /proc/scsi
tmpfs                                                                                                             2.0G     0  2.0G   0% /sys/firmware
# 

4 MiB PVC

[🎩︎]mrajanna@fedora cephfs $]cat pvc.yaml 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Mi
  storageClassName: rook-cephfs
[🎩︎]mrajanna@fedora cephfs $]kubectl create -f pvc.yaml 
persistentvolumeclaim/cephfs-pvc created
[🎩︎]mrajanna@fedora cephfs $]kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc   Bound    pvc-d906625b-5181-4e38-8596-842fa29286db   4Mi        RWO            rook-cephfs    2s
[🎩︎]mrajanna@fedora cephfs $]kubectl create -f pod.yaml 
pod/csicephfs-demo-pod created
[🎩︎]mrajanna@fedora cephfs $]kubectl exec -it pod/csicephfs-demo-pod -- sh
# df -h
Filesystem                                                                                                        Size  Used Avail Use% Mounted on
overlay                                                                                                            28G  6.5G   20G  25% /
tmpfs                                                                                                              64M     0   64M   0% /dev
tmpfs                                                                                                             2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vda1                                                                                                          28G  6.5G   20G  25% /etc/hosts
shm                                                                                                                64M     0   64M   0% /dev/shm
10.97.50.161:6789:/volumes/csi/csi-vol-0ebfc24a-01ba-11ed-9045-e63b4f507f02/a05967d1-5b9e-4d5a-9e90-f68c939af7bc  4.0M     0  4.0M   0% /var/lib/www/html
tmpfs                                                                                                             3.9G   12K  3.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                                                                                             2.0G     0  2.0G   0% /proc/acpi
tmpfs                                                                                                             2.0G     0  2.0G   0% /proc/scsi
tmpfs                                                                                                             2.0G     0  2.0G   0% /sys/firmware
# 

1023 MiB PVC

[🎩︎]mrajanna@fedora cephfs $]cat pvc.yaml 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1023Mi
  storageClassName: rook-cephfs
[🎩︎]mrajanna@fedora cephfs $]kubectl create -f pvc.yaml 
persistentvolumeclaim/cephfs-pvc created
[🎩︎]mrajanna@fedora cephfs $]kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc   Bound    pvc-73babf43-2164-4b86-9ca2-9fb896822675   1Gi        RWO            rook-cephfs    3s
[🎩︎]mrajanna@fedora cephfs $]kubectl create -f pod.yaml 
pod/csicephfs-demo-pod created
[🎩︎]mrajanna@fedora cephfs $]kubectl exec -it pod/csicephfs-demo-pod -- sh
# df -h
Filesystem                                                                                                        Size  Used Avail Use% Mounted on
overlay                                                                                                            28G  6.5G   20G  25% /
tmpfs                                                                                                              64M     0   64M   0% /dev
tmpfs                                                                                                             2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vda1                                                                                                          28G  6.5G   20G  25% /etc/hosts
shm                                                                                                                64M     0   64M   0% /dev/shm
10.97.50.161:6789:/volumes/csi/csi-vol-37269de4-01ba-11ed-9045-e63b4f507f02/9ee1f2dc-50fd-401d-ae48-b1a611bd9a74  1.0G     0  1.0G   0% /var/lib/www/html
tmpfs                                                                                                             3.9G   12K  3.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                                                                                             2.0G     0  2.0G   0% /proc/acpi
tmpfs                                                                                                             2.0G     0  2.0G   0% /proc/scsi
tmpfs                                                                                                             2.0G     0  2.0G   0% /sys/firmware
# 

@mergify mergify bot added the component/cephfs Issues related to CephFS label Jul 12, 2022
@Madhu-1 Madhu-1 requested review from a team July 12, 2022 08:56
@yati1998 (Contributor) left a comment

LGTM

@nixpanic (Member) left a comment

Looks correct to me: round up to a multiple of 4 MiB. The implementation could be a little clearer, though, as I had to track each step to verify it.

return 4 * helpers.MiB
}

bytes /= helpers.MiB
Review comment (Member):

this naming is confusing, these are not bytes anymore, but MiBs



bytes = int64(math.Ceil(float64(bytes)/4) * 4)
Review comment (Member):

ok, multiple steps at once:

  1. convert to float
  2. divide MiBs by 4 (4.8 / 4 = 1.2)
  3. round upwards (1.2 => 2)
  4. multiply by four again (2 * 4 = 8)
  5. convert back to int

seems to be correct



return RoundOffBytes(bytes * helpers.MiB)
Review comment (Member):

convert bytes (in MiB) back to bytes
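The reviewer's step-by-step reading of the `math.Ceil` expression can be checked with a quick standalone snippet (plain Go, not ceph-csi code; `roundUpTo4` is a hypothetical name):

```go
package main

import (
	"fmt"
	"math"
)

// roundUpTo4 mirrors the reviewed expression: convert the MiB count to
// float, divide by 4, round upwards, multiply by 4, convert back to int.
func roundUpTo4(mibs int64) int64 {
	return int64(math.Ceil(float64(mibs)/4) * 4)
}

func main() {
	// 5 MiB rounds up to 8 MiB; exact multiples are unchanged;
	// 1023 MiB rounds up to 1024 MiB (1 GiB).
	fmt.Println(roundUpTo4(5), roundUpTo4(8), roundUpTo4(1023)) // prints: 8 8 1024
}
```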

@Madhu-1 (Collaborator, Author) commented Jul 13, 2022

@Mergifyio rebase

@Madhu-1 (Collaborator, Author) commented Jul 13, 2022

CI is stuck, restarting tests with rebase

mergify bot (Contributor) commented Jul 13, 2022

rebase

✅ Branch has been successfully rebased

@Madhu-1 (Collaborator, Author) commented Jul 13, 2022

@Mergifyio rebase

Due to the bug in the df stat we need to round off
the subvolume size to align with 4Mib.

Note:- Minimum supported size in cephcsi is 1Mib,
we dont need to take care of Kib.

fixes ceph#3240

More details at ceph/ceph#46905

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>

mergify bot (Contributor) commented Jul 13, 2022

rebase

✅ Branch has been successfully rebased

@Madhu-1 Madhu-1 added ready-to-merge This PR is ready to be merged and it doesn't need second review (backports only) ci/retry/e2e Label to retry e2e retesting on approved PR's and removed ready-to-merge This PR is ready to be merged and it doesn't need second review (backports only) labels Jul 13, 2022
@ceph-csi-bot: /retest ci/centos/mini-e2e-helm/k8s-1.22

@ceph-csi-bot: @Madhu-1 "ci/centos/mini-e2e-helm/k8s-1.22" test failed. Logs are available at location for debugging

@ceph-csi-bot: /retest ci/centos/k8s-e2e-external-storage/1.22

@ceph-csi-bot: @Madhu-1 "ci/centos/k8s-e2e-external-storage/1.22" test failed. Logs are available at location for debugging

@ceph-csi-bot: /retest ci/centos/mini-e2e-helm/k8s-1.21

@ceph-csi-bot: @Madhu-1 "ci/centos/mini-e2e-helm/k8s-1.21" test failed. Logs are available at location for debugging

@ceph-csi-bot: /retest ci/centos/mini-e2e-helm/k8s-1.23

@ceph-csi-bot: @Madhu-1 "ci/centos/mini-e2e-helm/k8s-1.23" test failed. Logs are available at location for debugging

@ceph-csi-bot: @Mergifyio requeue

mergify bot (requeue): ❌ This pull request head commit has not been previously disembarked from queue.

@ceph-csi-bot: /retest ci/centos/k8s-e2e-external-storage/1.22

@ceph-csi-bot: @Madhu-1 "ci/centos/k8s-e2e-external-storage/1.22" test failed. Logs are available at location for debugging

@ceph-csi-bot: @Mergifyio requeue

mergify bot (requeue): ❌ This pull request head commit has not been previously disembarked from queue.

@ceph-csi-bot: /retest ci/centos/k8s-e2e-external-storage/1.22

@ceph-csi-bot: @Madhu-1 "ci/centos/k8s-e2e-external-storage/1.22" test failed. Logs are available at location for debugging

@ceph-csi-bot: /retest ci/centos/mini-e2e-helm/k8s-1.21

@ceph-csi-bot: @Madhu-1 "ci/centos/mini-e2e-helm/k8s-1.21" test failed. Logs are available at location for debugging

@ceph-csi-bot: @Mergifyio requeue

mergify bot (requeue): ❌ This pull request head commit has not been previously disembarked from queue.

@ceph-csi-bot: /retest ci/centos/k8s-e2e-external-storage/1.22

@ceph-csi-bot: @Madhu-1 "ci/centos/k8s-e2e-external-storage/1.22" test failed. Logs are available at location for debugging

@ceph-csi-bot: @Mergifyio requeue

mergify bot (requeue): ❌ This pull request head commit has not been previously disembarked from queue.

@ceph-csi-bot: /retest ci/centos/k8s-e2e-external-storage/1.22

@ceph-csi-bot: @Madhu-1 "ci/centos/k8s-e2e-external-storage/1.22" test failed. Logs are available at location for debugging

@ceph-csi-bot: @Mergifyio requeue

mergify bot (requeue): ❌ This pull request head commit has not been previously disembarked from queue.

@mergify mergify bot merged commit f171143 into ceph:devel Jul 13, 2022

Labels

ci/retry/e2e: Label to retry e2e retesting on approved PR's
component/cephfs: Issues related to CephFS

Development

Successfully merging this pull request may close these issues:

df output is not matching with cephfs pvc size

5 participants