
Memory Pool

Virtual memory[1] allows programs to use more memory than physically available by mapping logical addresses to physical ones, with the heap being a region within virtual memory for dynamic memory allocation. Functions like malloc in C and new in C++ are used to allocate memory from the heap, with malloc providing raw allocation and new offering both allocation and initialization.

However, frequent memory allocations can lead to performance overhead and fragmentation, which is where memory pools come in. A memory pool pre-allocates a large block of memory and divides it into smaller chunks for reuse, improving efficiency, reducing fragmentation, and enhancing performance in systems with critical memory needs.

Designs

Memory pool implementations often require specialization based on the specific needs of the application, whereas thread pool implementations tend to be more standardized across different systems.

Fixed-Size Block Pool

  • The memory pool consists of blocks of a fixed size. When a memory request is made, a block of the pre-determined size is allocated from the pool.
  • Best for applications where objects of the same size are frequently allocated and deallocated, such as handling objects of fixed size in a game engine or for networking buffers.

Variable-Size Block Pool

  • The pool can handle blocks of different sizes. This type often uses multiple free lists, one for each block size, to manage memory efficiently.
  • Suitable for applications where object sizes vary, such as database management systems or general-purpose applications.
  • More flexible than fixed-size pools, but requires more complex management and bookkeeping to avoid fragmentation.

Object Pool (Slab Allocator)

  • Memory is divided into slabs, each containing multiple objects of the same type. Each slab can hold a specific number of objects, and once the slab is full, a new one is allocated.
  • Frequently used in systems where a large number of objects of the same type are allocated and deallocated, like managing database connections or network requests.

Buddy Memory Pool

  • Memory is allocated in blocks whose sizes are powers of two. When a smaller block is needed, a larger free block is split into two adjacent halves (buddies); when a block is freed, it may be merged with its buddy to re-form the larger block.
  • Often used in operating systems for managing kernel memory, and in applications where memory blocks need to be allocated in a flexible yet structured manner (e.g. for systems requiring variable block sizes).

Free List-based Pool

  • Memory is divided into a series of blocks, and a linked list (free list) is used to track free blocks. The allocator simply pops blocks from the free list when a request is made and pushes freed blocks back onto the list.
  • Often used in general-purpose applications where memory allocations and deallocations are relatively random, such as in custom memory allocators.

Implementation

A minimal C implementation of a fixed-size block pool:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// 4KB page size, typically the size of a memory page in many systems
#define MEM_PAGE_SIZE 0x1000

typedef struct mempool_s {
  int blocksize; // size of each memory block in the pool
  int freecount; // number of free blocks remaining in the pool
  char *free_ptr; // pointer to the next free memory block
  char *mem; // pointer to the start of the allocated memory region
} mempool_t;

// create a memory pool with blocks of the specified size
// this pool will have a size of 4096 bytes (MEM_PAGE_SIZE),
// and each block is of the given block_size.
int memp_create(mempool_t *m, int block_size) {
  // each free block stores a "next" pointer, so blocks must be at least that large
  if (!m || block_size < (int)sizeof(char *)) return -1;

  m->blocksize = block_size;
  m->freecount = MEM_PAGE_SIZE / block_size;

  // allocate a memory page (4KB)
  m->mem = (char *)malloc(MEM_PAGE_SIZE);
  if (!m->mem) return -2; // memory allocation failed

  memset(m->mem, 0, MEM_PAGE_SIZE); // init the allocated memory with zeros

  m->free_ptr = m->mem;

  int i = 0;
  char *ptr = m->mem;

  // loop to set up the free blocks in the pool by linking them together
  for (i = 0; i < m->freecount - 1; i++) {
    *(char **)ptr = ptr + block_size; // set the next free block pointer
    ptr = ptr + block_size; // move to the next block
  }

  *(char **)ptr = NULL; // the last block's "next" pointer is NULL
  return 0;
}

// destroy the memory pool by freeing the allocated memory
void memp_destory(mempool_t *m) {
  if (!m) return;
  free(m->mem);
}

// allocate a block of memory from the pool
void* memp_alloc(mempool_t *m) {
  if (!m || m->freecount == 0) return NULL;

  void *ptr = m->free_ptr;

  m->free_ptr = *(char **)ptr; // set the free pointer to the next free block
  m->freecount--; // decrease the count of free blocks

  return ptr;
}

// free a previously allocated block and return it to the pool
void memp_free(mempool_t *m, void *ptr) {
  if (!m || !ptr) return; // ignore invalid frees
  *(char **)ptr = m->free_ptr; // insert ptr before the current free pointer
  m->free_ptr = (char *)ptr; // update the free pointer to the freed block
  m->freecount++;
}

int main() {
  mempool_t m;
  if (memp_create(&m, 32) != 0) return 1; // create a pool of 32-byte blocks

  void *p1 = memp_alloc(&m);
  printf("memp_alloc: p1 %p\n", p1);

  void *p2 = memp_alloc(&m);
  printf("memp_alloc: p2 %p\n", p2);

  memp_free(&m, p2);
  printf("memp_free: p2\n");

  void *p3 = memp_alloc(&m);
  printf("memp_alloc: p3 %p\n", p3);

  void *p4 = memp_alloc(&m);
  printf("memp_alloc: p4 %p\n", p4);

  memp_destory(&m);
  printf("memp_destory\n");
}
A minimal CMakeLists.txt to build the example:
cmake_minimum_required(VERSION 3.10)

project(MemPoolDemo)

set(CMAKE_C_STANDARD 99)

add_executable(main main.c)
Sample output:
memp_alloc: p1 0x121009000
memp_alloc: p2 0x121009020
memp_free: p2
memp_alloc: p3 0x121009020
memp_alloc: p4 0x121009040
memp_destory

Notes on Libraries

  • tcmalloc: widely used in high-performance applications (e.g. databases, large-scale web servers) for efficient memory management. It's the default allocator for Google's internal systems.
  • jemalloc: commonly used in memory-intensive applications, including Firefox, Facebook, and Redis.
  • libpmem: used for applications that work with persistent memory, like databases, where memory needs to be reliably stored across crashes.

Long-running applications

Long-running applications can suffer from memory fragmentation, leading to inefficient memory use and allocation failures. Fragmentation, especially external fragmentation, can prevent large contiguous memory allocations, causing potential crashes or core dumps. To mitigate this, memory pools and custom allocators can be used to reduce fragmentation, ensuring more efficient memory management and reducing the risk of allocation failures over time.


  1. Virtual memory is a memory management technique that creates an illusion of a larger memory space than what is physically available by using both RAM and disk storage. It allows programs to access more memory than the system physically has by mapping virtual addresses to physical memory addresses. When a program requests memory, the operating system manages the allocation, using paging[2] and swapping[3] to move data between RAM and disk storage as needed. Virtual memory enables multitasking, process isolation, and efficient memory usage, even in systems with limited physical RAM. 

  2. Pages are fixed-length blocks of memory used in virtual memory systems. The operating system divides both virtual memory and physical memory into pages, typically 4 KB in size, although the size can vary. When a program accesses memory, the system translates the program's virtual address into a physical address, mapping it to the corresponding page in RAM. If the page isn't in RAM, a page fault occurs, and the data is loaded from disk into memory. The division into pages allows efficient memory management and enables features like swapping. 

  3. Swapping is the process of moving data between RAM and a designated area on disk, called swap space or a swap file, when the system's physical memory (RAM) becomes full. When a program or process requires more memory than is available in RAM, the operating system selects less frequently used data or pages, swaps them out to the disk, and frees up space in RAM for active processes. When that data is needed again, it is swapped back into RAM. Swapping helps manage memory efficiently but can slow down performance due to the slower speed of disk access compared to RAM.