Memory on modern operating systems is a complicated topic. This article covers what memory is and how the Linux kernel manages memory. The next articles look at the memory information shown by free, top, and sar.
Computer memory is used to quickly store and retrieve data. Getting data from memory is much quicker than getting data from a storage device (such as an SSD) and the Linux kernel therefore uses memory as much as possible. This hugely improves system performance. However, unlike data stored on an SSD, memory is volatile. When the system powers off everything in memory is lost.
The operating system is responsible for managing memory: it writes data to memory, keeps an eye on memory usage, and deals with low-memory situations. If the system has a swap area, the kernel can use it to write memory data to disk when memory runs low. If that’s not possible, the kernel may kill processes to free up memory.
Linux is very transparent in how it manages memory and there are lots of utilities that can help troubleshoot memory-related issues. However, to get the most out of the tools you need to understand memory from the kernel’s perspective. That involves learning some jargon.
The first bits of jargon to cover are pages and the page cache. A “page” is a fixed-length block of memory. Just like data on a storage device, memory is managed in blocks. The default page size on most architectures is 4096 bytes (4 kibibytes). The “page cache” is the set of pages the kernel uses to cache file data in memory.
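If you want to check the page size on your own system, getconf will print it (a quick sketch; 4096 is typical on x86-64, but some architectures use larger pages):

```shell
# Print the kernel's page size in bytes (typically 4096)
getconf PAGE_SIZE
```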
The kernel uses the page cache both when you read and write data. So, when you write data it is first written to the page cache. At that point the memory is known as dirty, or dirty pages. That sounds bad, but it is simply data that is stored in the cache and at some point needs to be written to the disk. The adjective “dirty” isn’t quite accurate, but it is too late to change that now.
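You can see how much dirty data a Linux system currently holds by looking at /proc/meminfo (a small sketch; the Dirty and Writeback field names are standard, the values obviously vary from system to system):

```shell
# Show dirty memory and memory currently being written back, in kB
grep -E '^(Dirty|Writeback):' /proc/meminfo
```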
Similarly, data you read is also written to the page cache. This speeds things up, as the data can be served from memory if it is requested again. Data read from disk isn’t always requested again, so there is some waste here. However, the same data is re-read often enough that it makes sense to use the page cache.
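A rough way to see the read side of the page cache in action is to read the same file twice; the second read is typically served from memory (timings depend on your system, and the file may well be cached before the first read too):

```shell
# The first read may hit the disk; the repeat read comes from the page cache.
f=/etc/hosts          # any readable file works; this path is just an example
time cat "$f" > /dev/null
time cat "$f" > /dev/null
```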
Of course, memory is finite. As memory fills up, the kernel needs to evict some data from the cache. Dirty pages are never simply discarded: the kernel first writes them to disk, after which they are clean and can be evicted. For “clean” pages the kernel uses a least-recently-used style algorithm to decide which pages to evict. In simple terms, the kernel makes an educated guess about which pages are unlikely to be needed again. Pages in the cache that haven’t been accessed for a long time are at the top of the list.
You can tweak when dirty data is written to disk via various sysctl settings. The main setting is vm.dirty_background_ratio:

# sysctl vm.dirty_background_ratio
vm.dirty_background_ratio = 10
The setting is the percentage of available memory at which the kernel starts writing dirty data to disk in the background. The default is 10. So, if your free and cached memory add up to 8GB, the kernel starts writing out dirty data once roughly 800MB worth of dirty pages have accumulated.
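As a sketch, you can approximate that threshold yourself from /proc (using MemAvailable as a stand-in for the kernel’s notion of available memory; the kernel’s internal accounting differs slightly):

```shell
# Approximate the background-writeback threshold in kB:
# available memory multiplied by vm.dirty_background_ratio / 100.
ratio=$(cat /proc/sys/vm/dirty_background_ratio)
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
echo $(( avail_kb * ratio / 100 ))
```

Note that if vm.dirty_background_bytes is set to a non-zero value, it takes precedence and the ratio is ignored.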
There are a handful of other sysctl settings that manage how the kernel deals with dirty pages. Those settings are beyond the scope of this article. It all gets very technical very quickly, and the aim of this article is to help you understand the memory information shown by utilities such as free. If you want to learn more about configuring memory management, the kernel documentation is a good place to start.