Solution For Running Out of Inodes

An inode is a data structure in Unix-like file systems that stores information about a file, except for its name and its path. The inode is often described as the metadata of the data. A file in a Unix-like system is stored in two places on disk: the data blocks and the inode. The contents of the file are stored in the data blocks, while the inode holds information about that data. The inode contains the following information:

1) Mode/permission (protection)

2) Owner ID

3) Group ID

4) Size of file

5) Number of hard links to the file

6) Time last accessed

7) Time last modified

8) Time inode last modified
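Most of the fields listed above can be read for a single file with stat(1); here is a quick sketch on a scratch file (this uses the GNU `stat -c` format syntax; BSD stat uses `-f` instead):

```shell
tmp=$(mktemp)                 # scratch file to inspect
printf 'hello\n' > "$tmp"
# %i = inode number, %h = hard-link count, %s = size in bytes
stat -c 'inode=%i links=%h size=%s' "$tmp"
rm -f "$tmp"
```

The inode number printed will differ on every system, but the link count for a fresh file is 1 and the size is whatever was written to it.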

 

To list the inode numbers of the files in a directory, use the following commands.

# cd /

# ls -lai

 

total 132

2 drwxr-xr-x  24 root root   4096 Feb 26 13:31 .

2 drwxr-xr-x  24 root root   4096 Feb 26 13:31 ..

2637825 drwxr-xr-x   2 root root   4096 Jan 14 19:02 bin

196609 drwxr-xr-x   3 root root   4096 Feb 24 10:41 boot

3 drwxr-xr-x  16 root root   4460 Mar  5 09:35 dev

983041 drwxr-xr-x 206 root root  12288 Mar  5 07:45 etc

2 drwxr-xr-x  14 root root   4096 Dec 29 09:24 home

Two of the most common server-side problems are disk space exhaustion and high load average. You may see the message “No space left on device” or “disk is full” even though there is plenty of free space on the server. If you run into this, the server has most likely exhausted its available inodes. Let’s walk through the solution to this problem.

 

Solution

1) The first step is to check whether your server has enough free disk space. Use the command below to check the available disk space on the server.

# df

Filesystem         1K-blocks    Used     Available   Use%     Mounted on

/dev/xvda         33030016   10407780  22622236   32%     /

tmpfs             368748     0        368748     0%      /lib/init/rw

varrun            368748     56       368692      1%     /var/run

varlock            368748     0        368748     0%      /var/lock

udev              368748     108      368640     1%      /dev

tmpfs             368748      0       368748     0%      /dev/shm

 

2) The second step is to check the available inodes on your server. Use the command below.

# df -i

Filesystem         Inodes     IUsed      IFree     IUse%   Mounted on

/dev/xvda         2080768    2080768     0      100%    /

tmpfs             92187      3          92184   1%     /lib/init/rw

varrun            92187      38          92149   1%    /var/run

varlock            92187      4          92183   1%    /var/lock

udev              92187     4404        87783   5%    /dev

tmpfs             92187       1         92186   1%    /dev/shm

If the IUse% value is at or near 100%, a large number of files is the cause of the issue.
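Rather than eyeballing the table, the same `df -i` output can be parsed to flag any filesystem approaching inode exhaustion. A minimal sketch using awk, with 90% as an arbitrary warning threshold:

```shell
# Warn about any filesystem whose inode usage (IUse% column) is >= 90%.
# awk's $5+0 coerces a value like "100%" to the number 100.
df -i | awk 'NR > 1 && $5+0 >= 90 {print $1 " is at " $5 " inode usage"}'
```

Dropped into cron, a line like this gives early warning before writes start failing.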

 

3) The next step is to find those files. We can use a small loop that lists each top-level directory along with the number of files it contains.

#  for i in /*; do echo "$i"; find "$i" | wc -l; done

The output shows which directory contains the largest number of files; repeat the loop inside that directory, as below, until you reach the offending directory.

#  for i in /home/*; do echo "$i"; find "$i" | wc -l; done
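If GNU coreutils 8.22 or newer is available, `du --inodes` does the same per-directory counting in one pass; sorting numerically puts the heaviest directories at the bottom. `/home` here is just an example starting point:

```shell
# Count inodes per subdirectory (stay on one filesystem with -x),
# then sort so the directories using the most inodes appear last.
du --inodes -x /home 2>/dev/null | sort -n | tail -5
```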

 

4) Once you find the directory holding a large number of unwanted files, delete them to free up inodes, for example with the following command.

#  rm -rf /home/bad_user/directory_with_lots_of_empty_files
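If the culprit files are known to be empty, a less destructive alternative to `rm -rf` is letting find delete only zero-byte files. Sketched here on a throwaway directory so nothing real is at risk:

```shell
d=$(mktemp -d)
touch "$d/empty1" "$d/empty2"          # two zero-byte files
printf 'keep me\n' > "$d/real_data"    # one file with content
# Delete only empty regular files; anything with data survives.
find "$d" -type f -empty -delete
ls "$d"                                # prints: real_data
rm -rf "$d"
```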

You have successfully solved the problem. Run df -i again and you can see the difference:

# df -i

Filesystem            Inodes    IUsed    IFree  IUse%  Mounted on

/dev/xvda            2080768  284431  1796337    14%   /

tmpfs                92187       3    92184     1%    /lib/init/rw

varrun                92187      38    92149     1%   /var/run

varlock                92187       4    92183     1%   /var/lock

udev                  92187    4404    87783     5%   /dev

tmpfs                 92187       1    92186     1%   /dev/shm

Linux KVM – How to Resize a Virtual Disk on the Fly?

  1. Log in to the guest VM (UAKVM2) and identify which disk needs to be resized.
[root@UA-KVM1 ~]# df -h /orastage
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdc       1014M   33M  982M   4% /orastage
[root@UA-KVM1 ~]# mount -v |grep /orastage
/dev/vdc on /orastage type xfs (rw,relatime,attr2,inode64,noquota)
[root@UA-KVM1 ~]#
[root@UA-KVM1 ~]# fdisk -l /dev/vdc

Disk /dev/vdc: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@UA-KVM1 ~]#

 

2. Log in to the KVM hypervisor that hosts the VM.

3. Identify the virtual disk mapping for the KVM guest.

[root@UA-HA ~]# virsh domblklist UAKVM2 --details
Type       Device     Target     Source
------------------------------------------------
file       disk       vda        /var/lib/libvirt/images/UAKVM2.qcow2
block      disk       vdb        /dev/sdb
file       disk       vdc        /var/lib/libvirt/images/UAKVM2.disk2.qcow2
block      cdrom      hda        -

[root@UA-HA ~]#

4. Refresh the KVM storage pool.

[root@UA-HA ~]# virsh pool-list
 Name                 State      Autostart
-------------------------------------------
 default              active     yes
 [root@UA-HA ~]#
[root@UA-HA ~]# virsh pool-refresh default
Pool default refreshed
[root@UA-HA ~]#

5. List the virtual disks using the virsh vol-list command. (vdc = UAKVM2.disk2.qcow2)

[root@UA-HA ~]# virsh vol-list  default
 Name                 Path
------------------------------------------------------------------------------
 UAKVM2.disk2.qcow2   /var/lib/libvirt/images/UAKVM2.disk2.qcow2
 UAKVM2.disk3.img     /var/lib/libvirt/images/UAKVM2.disk3.img
 UAKVM2.disk4.img     /var/lib/libvirt/images/UAKVM2.disk4.img
 UAKVM2.qcow2         /var/lib/libvirt/images/UAKVM2.qcow2
[root@UA-HA ~]#

6. Use the QEMU monitor to list the block devices allocated to the “UAKVM2” domain.

[root@UA-HA ~]# virsh qemu-monitor-command UAKVM2 --hmp "info block"
drive-virtio-disk0: removable=0 io-status=ok file=/var/lib/libvirt/images/UAKVM2.qcow2 ro=0 drv=qcow2 encrypted=0 bps=0 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0
drive-virtio-disk1: removable=0 io-status=ok file=/dev/sdb ro=0 drv=raw encrypted=0 bps=0 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0
drive-virtio-disk2: removable=0 io-status=ok file=/var/lib/libvirt/images/UAKVM2.disk2.qcow2 ro=0 drv=raw encrypted=0 bps=0 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0
drive-ide0-0-0: removable=1 locked=0 tray-open=0 io-status=ok [not inserted]
[root@UA-HA ~]#

From the command output above, we can see that the virtual disk “UAKVM2.disk2.qcow2” is mapped to drive-virtio-disk2.

7. Increase the virtual disk size and notify the virtio driver of the change. (Do not reduce the disk size!)

[root@UA-HA images]# virsh qemu-monitor-command UAKVM2 --hmp "block_resize drive-virtio-disk2 2G"
[root@UA-HA images]#
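Newer libvirt releases expose the same operation directly as `virsh blockresize`, addressing the disk by its target name (vdc, as shown by `virsh domblklist`) instead of the QEMU drive alias. A sketch, assuming a reasonably recent libvirt on the hypervisor:

```shell
# Same resize via the libvirt API; the target "vdc" comes from
# `virsh domblklist UAKVM2`. Never pass a size smaller than the current one.
virsh blockresize UAKVM2 vdc 2G
```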

8. Log in to the KVM guest (UAKVM2) and check the size of the “vdc” disk.

[root@UA-KVM1 ~]# fdisk -l /dev/vdc

Disk /dev/vdc: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@UA-KVM1 ~]#

9. Extend the filesystem. In this example the filesystem type is XFS.

[root@UA-KVM1 ~]# df -h /orastage
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdc       1014M   33M  982M   4% /orastage
[root@UA-KVM1 ~]# mount -v |grep /orastage
/dev/vdc on /orastage type xfs (rw,relatime,attr2,inode64,noquota)
[root@UA-KVM1 ~]#
[root@UA-KVM1 ~]# xfs_growfs /orastage/
meta-data=/dev/vdc               isize=256    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 262144 to 1310720
[root@UA-KVM1 ~]#
[root@UA-KVM1 ~]# df -h /orastage/
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdc        2.0G   33M  2.0G   1% /orastage
[root@UA-KVM1 ~]#
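Had the filesystem been ext4 rather than XFS, `resize2fs` would do the grow instead of `xfs_growfs`. The same mechanics can be rehearsed safely on a plain file image, with no root access or VM needed (assuming e2fsprogs is installed):

```shell
img=$(mktemp)
truncate -s 16M "$img"            # back the filesystem with a sparse file
mkfs.ext4 -q -F "$img"            # -F: target is a regular file, not a device
truncate -s 32M "$img"            # simulate growing the underlying disk
e2fsck -f -p "$img" >/dev/null    # resize2fs wants a recently checked fs
resize2fs "$img" >/dev/null 2>&1  # grow ext4 to fill the new size
rm -f "$img"
```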

We have successfully resized the virtual disk and notified the virtio driver of the change. No reboot or rescan is required for the VM to see the new disk size.