Memory concepts

The following are some commonly used memory-related terms:

  • Main memory: Also referred to as physical memory, this describes the fast data storage area of a computer, commonly provided as DRAM.
  • Virtual memory: An abstraction of main memory that is (almost) infinite and non-contended. Virtual memory is not real memory.
  • Resident memory: Memory that currently resides in main memory.
  • Anonymous memory: Memory with no file system location or path name. It includes the working data of a process address space, called the heap.
  • Address space: A memory context. There are virtual address spaces for each process, and for the kernel.
  • Segment: An area of virtual memory flagged for a particular purpose, such as for storing executable or writeable pages.
  • Instruction text: Refers to CPU instructions in memory, usually in a segment.
  • OOM: Out of memory, when the kernel detects low available memory.
  • Page: A unit of memory, as used by the OS and CPUs. Historically it is either 4 or 8 Kbytes. Modern processors have multiple page size support for larger sizes.
  • Page fault: An invalid memory access. These are normal occurrences when using on-demand virtual memory.
  • Paging: The transfer of pages between main memory and the storage devices.
  • Swapping: Linux uses the term swapping to refer to anonymous paging to the swap device (the transfer of swap pages). In Unix and other operating systems, swapping is the transfer of entire processes between main memory and the swap devices. This book uses the Linux version of the term.
  • Swap: An on-disk area for paged anonymous data. It may be an area on a storage device, also called a physical swap device, or a file system file, called a swap file. Some tools use the term swap to refer to virtual memory (which is confusing and incorrect).
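
Since the page is the unit the OS manages memory in, the system's page size can be queried directly; a minimal sketch using Python's standard library:

```python
import mmap

# The kernel and CPU manage memory in fixed-size pages; mmap exposes the size.
page_size = mmap.PAGESIZE
print(f"page size: {page_size} bytes")  # commonly 4096 (4 KiB) on x86-64

# Page sizes are always powers of two.
assert page_size & (page_size - 1) == 0
```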

Virtual Memory

Virtual memory is an abstraction that provides each process and the kernel with its own large, linear, and private address space. It simplifies software development, leaving physical memory placement for the operating system to manage. It also supports multitasking (virtual address spaces are separated by design) and oversubscription (in-use memory can extend beyond main memory).

Paging

Paging is the movement of pages in and out of main memory, which are referred to as page-ins and page-outs, respectively.

File System Paging: File system paging is caused by the reading and writing of pages in memory-mapped files. This is normal behavior for applications that use file memory mappings (mmap(2)) on file systems that use the page cache.
Anonymous Paging (Swapping): Anonymous paging involves data that is private to processes: the process heap and stacks. It is termed anonymous because it has no named location in the operating system (i.e., no file system path name). Anonymous page-outs require moving the data to the physical swap devices or swap files. Linux uses the term swapping to refer to this type of paging.

Demand Paging

Operating systems that support demand paging (most do) map pages of virtual memory to physical memory on demand. This defers the CPU overhead of creating the mappings until they are actually needed and accessed, instead of at the time a range of memory is first allocated.

If the mapping can be satisfied from another page already in memory, it is called a minor fault. Page faults that require storage device access, such as accessing an uncached memory-mapped file, are called major faults.
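
The on-demand behavior can be observed directly. The following sketch (Unix-only, using Python's resource module; the page count is arbitrary) maps anonymous memory and watches the process's minor-fault counter grow as pages are touched for the first time:

```python
import mmap
import resource

# Map 256 pages of anonymous memory. The mapping is created lazily:
# physical pages are attached only when each page is first touched.
n_pages = 256
size = n_pages * mmap.PAGESIZE
buf = mmap.mmap(-1, size)  # anonymous, demand-paged mapping

before = resource.getrusage(resource.RUSAGE_SELF).ru_minflt

# Touch one byte in every page: each first touch is a minor page fault.
for off in range(0, size, mmap.PAGESIZE):
    buf[off] = 1

after = resource.getrusage(resource.RUSAGE_SELF).ru_minflt
print(f"minor faults during touch loop: {after - before}")
buf.close()
```

The exact count varies with the platform and allocator, but the counter should rise roughly in step with the number of pages touched.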

States of a page in virtual memory:
A. Unallocated
B. Allocated, but unmapped (unpopulated and not yet faulted in)
C. Allocated, and mapped to main memory (RAM)
D. Allocated, and mapped to the physical swap device (disk)

  • Resident set size (RSS): The size of allocated main memory pages (C)
  • Virtual memory size: The size of all allocated areas (B + C + D)
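
A toy tally illustrates how the two sizes follow from the page states above (the page populations are made up):

```python
# Toy model of the page states above: tally RSS and virtual size for a
# hypothetical process whose pages are labeled B, C, or D (state A is
# unallocated, so it never appears in the list).
PAGE_KIB = 4  # assume 4 KiB pages

pages = ["C"] * 300 + ["B"] * 100 + ["D"] * 50  # made-up page populations

rss_kib = pages.count("C") * PAGE_KIB                        # C only
virtual_kib = sum(pages.count(s) for s in "BCD") * PAGE_KIB  # B + C + D

print(f"RSS: {rss_kib} KiB, virtual: {virtual_kib} KiB")  # 1200 KiB, 1800 KiB
```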

Overcommit

Linux supports the notion of overcommit, which allows more memory to be allocated than the system can possibly store—more than physical memory and swap devices combined. It relies on demand paging and the tendency of applications to not use much of the memory they have allocated.

Process Swapping

Process swapping is the movement of entire processes between main memory and the physical swap device or swap file.

File System Cache Usage

It is normal for memory usage to grow after system boot as the operating system uses available memory to cache the file system, improving performance. The principle is: If there is spare main memory, use it for something useful. 

Utilization and Saturation

Main memory utilization can be calculated as used memory versus total memory. Memory used by the file system cache can be treated as unused, as it is available for reuse by applications. If demands for memory exceed the amount of main memory, main memory becomes saturated.

Allocators

While virtual memory handles multitasking of physical memory, the actual allocation and placement within a virtual address space are often handled by allocators. 

Shared Memory

Memory can be shared between processes. This is commonly used for system libraries to save memory by sharing one copy of their read-only instruction text with all processes that use it.

Proportional set size (PSS)

The process's private (unshared) memory plus its shared memory divided by the number of processes sharing it.
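
A quick worked example of this definition (the figures are made up):

```python
# PSS per the definition above: private memory plus an equal share of
# each shared mapping.
def pss(private_kib, shared_kib, num_sharers):
    return private_kib + shared_kib / num_sharers

# A process with 100 KiB private memory and a 300 KiB library mapping
# shared by three processes is charged 100 + 300/3 = 200 KiB.
print(pss(100, 300, 3))  # 200.0
```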

Working Set Size

Working set size (WSS) is the amount of main memory a process frequently uses to perform work. 

Word Size

Processors may support multiple word sizes, such as 32-bit and 64-bit, allowing software for either to run. As the address space size is bounded by the addressable range of the word size, applications requiring more than 4 Gbytes of memory are too large for a 32-bit address space and need to be compiled for 64 bits.
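
The 4 Gbyte bound follows directly from the arithmetic:

```python
# A 32-bit virtual address can name 2**32 distinct bytes: exactly 4 GiB.
addressable = 2 ** 32
print(addressable)                 # 4294967296
print(addressable // (1024 ** 3))  # 4 (GiB)

# A 64-bit word size lifts the bound far beyond any practical RAM size:
# 2**64 bytes is 16 EiB (here expressed in GiB).
assert 2 ** 64 // (1024 ** 3) == 16 * 1024 ** 3
```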

How to Add Memory, vCPU, Hard Disk to Linux KVM Virtual Machine

In this example, let us increase the memory of the myRHELVM1 VM from 2GB to 4GB.

First, shutdown the VM using virsh shutdown as shown below:

# virsh shutdown myRHELVM1
Domain myRHELVM1 is being shutdown

Next, edit the VM using virsh edit:

# virsh edit myRHELVM1

Look for the line below and change the memory value as follows. In this example, it was previously 2097152:

<memory unit='KiB'>4194304</memory>

Please note that the above value is in KiB. After making the change, save and exit:

# virsh edit myRHELVM1
Domain myRHELVM1 XML configuration edited.

Restart the VM with the updated configuration file. Now you will see the max memory increased from 2G to 4G.

You can now dynamically modify the VM memory up to the 4G max limit.
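
Both the <memory> element and virsh setmem take KiB, so it is worth sanity-checking the numbers used in this walkthrough (gib_to_kib is just an illustrative helper):

```python
# The <memory> value in the domain XML is in KiB (1 GiB = 1024 * 1024 KiB).
def gib_to_kib(gib):
    return gib * 1024 * 1024

print(gib_to_kib(2))  # 2097152  (the original value)
print(gib_to_kib(4))  # 4194304  (the new value)
```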

Start the VM from its domain XML file using virsh create:

# virsh create /etc/libvirt/qemu/myRHELVM1.xml
Domain myRHELVM1 created from /etc/libvirt/qemu/myRHELVM1.xml

View the available Memory for this domain. As you see below, even though the maximum available memory is 4GB, this domain only has 2GB (Used memory).

# virsh dominfo myRHELVM1 | grep memory
Max memory:     4194304 KiB
Used memory:    2097152 KiB

Set the memory for this domain to 4GB using virsh setmem as shown below:

# virsh setmem myRHELVM1 4194304

Now, the following indicates that we’ve allocated 4GB (Used memory) to this domain.

# virsh dominfo myRHELVM1 | grep memory
Max memory:     4194304 KiB
Used memory:    4194304 KiB

2. Add VCPU to VM

To increase the virtual CPU that is allocated to the VM, do virsh edit, and change the vcpu parameter as explained below.

In this example, let us increase the vCPU count of the myRHELVM1 VM from 2 to 4.

First, shutdown the VM using virsh shutdown as shown below:

# virsh shutdown myRHELVM1
Domain myRHELVM1 is being shutdown

Next, edit the VM using virsh edit:

# virsh edit myRHELVM1

Look for the line below and change the vcpu value as follows. In this example, it was previously 2:

<vcpu placement='static'>4</vcpu>

Start the VM from its domain XML file using virsh create:

# virsh create /etc/libvirt/qemu/myRHELVM1.xml
Domain myRHELVM1 created from /etc/libvirt/qemu/myRHELVM1.xml

View the virtual CPUs allocated to this domain as shown below. This indicates that we’ve increased the vCPU from 2 to 4.

# virsh dominfo myRHELVM1 | grep -i cpu
CPU(s):         4
CPU time:       21.0s

3. Add Disk to VM

In this example, the VM has a single virtual disk (/dev/vda) with two partitions (vda1 and vda2):

# fdisk -l | grep vd
Disk /dev/vda: 10.7 GB, 10737418240 bytes
/dev/vda1   *           3        1018      512000   83  Linux
/dev/vda2            1018       20806     9972736   8e  Linux LVM

There are two steps involved in creating and attaching a new storage device to a Linux KVM guest VM:

  • First, create a virtual disk image
  • Attach the virtual disk image to the VM

Let us create one more virtual disk and attach it to our VM. For this, we first need to create a disk image file using the qemu-img create command as shown below.

In the following example, we are creating a virtual disk image 7GB in size. Disk images are typically located in the /var/lib/libvirt/images/ directory.

# cd /var/lib/libvirt/images/

# qemu-img create -f raw myRHELVM1-disk2.img 7G
Formatting 'myRHELVM1-disk2.img', fmt=raw size=7516192768
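
The size qemu-img reports is in bytes; 7G here means 7 GiB:

```python
# 7 GiB expressed in bytes, matching the qemu-img output above.
size_bytes = 7 * 1024 ** 3
print(size_bytes)  # 7516192768
```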

To attach the newly created disk image, use the virsh attach-disk command as shown below:

# virsh attach-disk myRHELVM1 --source /var/lib/libvirt/images/myRHELVM1-disk2.img --target vdb --persistent
Disk attached successfully

The above virsh attach-disk command has the following parameters:

  • myRHELVM1: The name of the VM
  • --source: The full path of the source disk image, i.e. the one we created with the qemu-img command above: myRHELVM1-disk2.img
  • --target: The device name inside the guest. In this example, we attach the disk image as /dev/vdb. Please note that you don't need to specify /dev; it is enough to specify vdb.
  • --persistent: Indicates that the disk attached to the VM will persist across restarts.

As you see below, the new /dev/vdb is now available on the VM.

# fdisk -l | grep vd
Disk /dev/vda: 10.7 GB, 10737418240 bytes
/dev/vda1   *           3        1018      512000   83  Linux
/dev/vda2            1018       20806     9972736   8e  Linux LVM
Disk /dev/vdb: 7516 MB, 7516192768 bytes

Now, you can partition the /dev/vdb device, create multiple partitions (/dev/vdb1, /dev/vdb2, etc.), and mount them inside the VM. Use fdisk to create the partitions as explained earlier.

Similarly, to detach a disk from the guest VM, use the command below. Be careful to specify the correct vd* device; otherwise you may end up removing the wrong one.

# virsh detach-disk myRHELVM1 vdb
Disk detached successfully

4. Save Virtual Machine Configuration

If you make a lot of changes to your VM, it is recommended that you save its configuration.

Use the virsh dumpxml command to back up and save the configuration information of your VM as shown below.

# virsh dumpxml myRHELVM1 > myrhelvm1.xml

# ls myrhelvm1.xml
myrhelvm1.xml

Once you have the configuration file in the XML format, you can always recreate your guest VM from this XML file, using virsh create command as shown below:

virsh create myrhelvm1.xml

5. Delete KVM Virtual Machine

If you've created multiple VMs for testing purposes and would like to delete them, follow these three steps:

  • Shutdown the VM
  • Destroy the VM (and undefine it)
  • Remove the Disk Image File

In this example, let us delete myRHELVM2 VM. First, shutdown this VM:

# virsh shutdown myRHELVM2
Domain myRHELVM2 is being shutdown

Next, destroy the VM as shown below:

# virsh destroy myRHELVM2
Domain myRHELVM2 destroyed

Apart from destroying it, you should also undefine the VM as shown below:

# virsh undefine myRHELVM2
Domain myRHELVM2 has been undefined

Finally, remove any disk image files that you created for this VM from the /var/lib/libvirt/images directory:

rm /var/lib/libvirt/images/myRHELVM2-disk1.img
rm /var/lib/libvirt/images/myRHELVM2-disk2.img

MEMORY OVERCOMMIT SETTINGS

To see the current state of your system's memory under the default settings, enter the following in a terminal:

cat /proc/meminfo

We can see lots of lines but the four we’re interested in are:

MemTotal: The total amount of physical RAM available on your system.

MemFree: The total amount of physical RAM not being used for anything.

CommitLimit: The total amount of memory, both RAM and swap, available to commit to running and requested applications (not necessarily directly related to the actual physical RAM amount; we will see why later).

Committed_AS: The total amount of memory required in the worst-case scenario right now, if every application actually used everything it asked for at startup!
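
These fields are easy to pull out programmatically; a minimal sketch with made-up sample values (on a real system, read /proc/meminfo itself):

```python
# Parse /proc/meminfo-style text into a dict of kB values.
def parse_meminfo(text):
    fields = {}
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        fields[name.strip()] = int(rest.split()[0])  # values are in kB
    return fields

sample = """\
MemTotal:        8000000 kB
MemFree:         2500000 kB
CommitLimit:     9000000 kB
Committed_AS:    6000000 kB"""

info = parse_meminfo(sample)
# Remaining commit headroom before allocations would be refused:
print(info["CommitLimit"] - info["Committed_AS"])  # 3000000 (kB)
```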

If the applications ever needed everything they originally asked for, an out-of-memory ('OOM') condition would occur. The OOM-killer would then kick in and try to free actual memory by killing running processes it thinks will help. By then, though, a kernel panic (or at best an X11 hang) might already have occurred, leaving a frozen system (the equivalent of a blue screen in MS terms), or the OOM-killer might have killed a critical system process.

To avoid the OOM-killer's somewhat arbitrary victim selection potentially killing a critical system process, or its failing to kick in before a kernel panic, we can change the following:

vm.overcommit_ratio=100: The percentage of physical RAM counted toward the commit limit. With strict overcommit enabled (see below), CommitLimit = swap + (RAM x overcommit_ratio / 100). (E.g. with RAM=1gb and SWAP=1gb, overcommit_ratio=100 means 2gb could be allocated to applications, while overcommit_ratio=50 would mean 1.5gb.)

vm.overcommit_memory=2: This tells the kernel never to commit more memory than the limit determined by overcommit_ratio. Allocation requests beyond that limit simply fail, so the OOM-killer should never need to run.
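
Under strict overcommit, the limit the kernel enforces can be computed as documented in proc(5); a small sketch:

```python
# With vm.overcommit_memory=2 (strict overcommit), the kernel computes:
#   CommitLimit = Swap + PhysicalRAM * overcommit_ratio / 100
def commit_limit_kib(ram_kib, swap_kib, ratio_percent):
    return swap_kib + ram_kib * ratio_percent // 100

# 1 GiB RAM + 1 GiB swap:
print(commit_limit_kib(1048576, 1048576, 100))  # 2097152 KiB (2 GiB)
print(commit_limit_kib(1048576, 1048576, 50))   # 1572864 KiB (1.5 GiB)
```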

We can change the above settings by entering the following into terminal:

sudo sync    # flush any cached file data in RAM to disk now

sudo sh -c "sync; echo 3 > /proc/sys/vm/drop_caches"    # drop all caches from RAM

cat /proc/meminfo    # check that Committed_AS is below CommitLimit

sudo sysctl -w vm.overcommit_ratio=99    # count 99% of physical RAM toward the commit limit

sudo sysctl -w vm.overcommit_memory=2    # only allow applications to start if there is enough memory under the limit set above

So now when we try to open a memory-hungry application, or we already have too many applications open, the new application is refused with a notification, e.g. 'file manager failed to fork', or fails to start because the memory is not available. An application could in theory start with the memory available right now and keep requesting more until the system becomes unusable and hangs or crashes. A web browser is a good example: it opens with only one tab, but during the day you open a dozen more, and at some point memory would be exhausted.

By using the two tweaks above we end up with a system that cannot promise applications more memory than it physically has. This prevents the hangs or kernel panics that render the entire system useless, potentially losing the last bits of information you were inputting; instead, it simply tells you there is no more memory and you need to go buy more RAM!

We now know our system will simply tell us there is no more memory for a new application to open, and we like it. To make these settings survive power cycles (rebooting), we add them to:

sudo gedit /etc/sysctl.conf    # I use gedit, but nano, vi, etc. all work

Add vm.overcommit_ratio=99 and vm.overcommit_memory=2 to the bottom of that file on separate lines and save (sysctl.conf takes bare key=value settings, not sysctl commands). Mine looks like this:

#system tweaks
vm.swappiness=5
vm.vfs_cache_pressure=50
vm.overcommit_ratio=99
vm.overcommit_memory=2

(I use 99% just to give a little allowance).

Of course, you could instead increase the size of your SWAP partition, since CommitLimit is a total of RAM + SWAP (remembering that SWAP is disk-based and therefore slower than RAM). That lets you open all those tabs or applications without 'failed to fork' messages. You could also add a SWAP partition if you don't already have one.

"But I have an SSD and SWAP is bad": well, yes, it is if you are constantly using it because you only have 1gb of RAM! If you have 4+gb of RAM, then depending on what you use your system for, SWAP on an SSD acts as a final safety net, saving you from a kernel panic under stock settings, or, with the settings above, stopping the constant 'failed to fork'. But if that message is still a regular occurrence following these changes, I'd suggest you buy more RAM!

NB: The default is vm.overcommit_memory=0, which in short means the kernel keeps no strict tabs on actual available memory space: it grants most requests for memory from applications using only a heuristic, and relies on the OOM-killer when memory runs out, in my experience followed by hangs and reboots.

check-new-release process eating up resources on Ubuntu

In the file /etc/update-manager/release-upgrades change Prompt=normal to Prompt=never.

You can also do this through the GUI, but that may not be appropriate for a virtual server.

You can do a manual check for a new release with do-release-upgrade

If you still see the process coming up, try doing:

apt-get remove ubuntu-release-upgrader-core

The script that you’re seeing running is /usr/lib/ubuntu-release-upgrader/check-new-release, and removing the above package will remove that script completely.

A description of this package is:

ubuntu-release-upgrader-core - manage release upgrades