Operating Systems Lecture 11: Memory allocation and free space management algorithms
Memory Allocation Algorithms
Understanding Variable Size Allocation
- The lecture focuses on memory allocation algorithms, specifically how free space is managed and how memory is allocated and freed in variable sizes.
- The C library obtains pages of memory for the heap from the operating system using system calls such as brk or mmap, then hands out smaller chunks to user programs through malloc.
- Memory consists of various chunks that are either allocated or free. Managing these chunks is crucial for both the C library and the kernel to handle malloc and free requests effectively.
Memory Management in the Kernel
- The kernel also requires memory allocation for its internal data structures, such as Process Control Blocks (PCBs), necessitating a solution for variable size data structure management.
- A simple implementation of malloc involves identifying an appropriate chunk of memory when a request (e.g., 20 bytes) is made by a user program.
Header Information in Allocated Chunks
- Each allocated chunk includes a header that stores metadata like the size of the chunk, which is essential for correctly freeing memory later.
- The header may also contain additional information such as magic numbers to ensure integrity and prevent overflow issues from previous chunks.
Managing Free Space with Free Lists
- Free space in the heap is managed through a free list where each free chunk contains pointers to subsequent free chunks, allowing efficient tracking of available memory.
- If only one free chunk exists, its next pointer is null; otherwise it holds the address of the next free chunk. Because these pointers live inside the free chunks themselves, tracking free space requires no extra memory.
Example of Memory Allocation Process
- When allocating memory (e.g., 100 bytes), the head pointer moves down to reflect this allocation while keeping track of remaining free space.
- As more allocations occur, if a middle chunk becomes freed by the user, it creates new elements in the free list that can be utilized for future allocations.
Understanding Memory Fragmentation and Allocation Strategies
The Problem of Fragmentation
- Memory fragmentation occurs when free memory chunks are scattered in different locations, making it impossible to satisfy larger allocation requests despite having enough total free memory.
- User programs often create many small holes in memory due to variable size allocations, leading to fragmentation. Effective memory allocation algorithms should aim to minimize this issue.
Free Chunk Management
- When multiple allocated chunks are freed, they can form a tangled list of free chunks. This complicates the management of available memory.
- Ideally, after freeing chunks, there should be a mechanism to merge these smaller free chunks back into larger ones to reduce fragmentation.
Coalescing Techniques
- Coalescing is essential for managing fragmented memory. It involves merging adjacent free chunks back into larger blocks after they have been freed.
- One effective method for coalescing is the buddy allocator system, which allocates memory in sizes that are powers of 2 (e.g., 4KB, 8KB).
Advantages of Buddy Allocation
- The buddy allocator simplifies splitting and merging processes by ensuring that when two adjacent blocks are free, they can easily combine into a larger block.
- This strategy allows for efficient use of space and reduces fragmentation by maintaining manageable chunk sizes.
Allocation Strategies Overview
- Different strategies exist for selecting which free chunk to allocate from based on the request size:
- First Fit: Allocates the first chunk that meets or exceeds the requested size.
- Best Fit: Chooses the smallest available chunk that satisfies the request.
- Worst Fit: Selects the largest available chunk, leaving potentially more usable space afterward.
Pros and Cons of Allocation Strategies
- Each strategy involves trade-offs; for instance:
  - Best fit wastes little space per allocation but may leave behind tiny, unusable fragments.
  - Worst fit leaves larger remaining chunks that might be more useful for future requests, at the cost of consuming the biggest chunks first.
Fixed Size Allocations vs Variable Size Allocations
- Fixed-size allocations simplify management as they avoid issues related to fragmentation seen with variable-sized allocations.
Kernel Memory Management: Understanding the Slab Allocator
Overview of Kernel Data Structures and Caching
- The kernel utilizes various data structures, such as the Process Control Block (PCB), and maintains caches for these objects. Each cache consists of slabs, which are large memory chunks (one or more pages).
- Each slab is dedicated to objects of a specific size. When a request for a PCB arises, the kernel allocates it from the designated PCB slab, ensuring efficient memory management.
Advantages of the Slab Allocator
- The slab allocator allows fixed-size allocations within each cache, significantly reducing fragmentation. This means that when an object is freed, there are no small gaps left in memory.
- The kernel primarily uses slab allocation for smaller-sized objects like PCBs. Variable size allocations are only considered when fixed-size options are insufficient.
User Programs and Fixed Size Memory Allocation