Operating Systems Lecture 10: Demand Paging
Demand Paging Explained
Introduction to Demand Paging
- Demand paging is a memory management scheme that allows the operating system to use secondary storage (like disk space) when main memory is insufficient for all active processes.
- The operating system designates part of the disk as swap space, where pages of inactive processes can be stored temporarily.
Page Usage and Memory Management
- Pages actively used by the CPU must reside in main memory, while those not currently in use can be swapped out to disk.
- For example, if a process has three pages but only one is being accessed, the other two can be stored on disk until needed.
Understanding Page Faults
- A page fault occurs when the CPU attempts to access a page that is not currently in main memory.
- The presence bit in the page table indicates whether a page is in memory or swapped out; if absent, it triggers a page fault.
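The present-bit check can be sketched as follows; this is a minimal illustration, and `PageTableEntry`, `PageFault`, and `translate` are invented names, not a real kernel API:

```python
# Minimal sketch of a page-table lookup with a presence bit (names invented).
class PageTableEntry:
    def __init__(self, frame=None, present=False):
        self.frame = frame      # physical frame number (valid only if present)
        self.present = present  # presence bit: is the page in main memory?

class PageFault(Exception):
    """Raised when the accessed page is not in main memory."""

def translate(page_table, vpn):
    """Map a virtual page number to its frame, or trap with a page fault."""
    entry = page_table.get(vpn)
    if entry is None or not entry.present:
        raise PageFault(vpn)    # OS must fetch the page from swap space
    return entry.frame
```

In real hardware this check happens in the MMU on every access; raising an exception here stands in for the hardware trap.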
Handling Page Faults
- When a page fault occurs, the Memory Management Unit (MMU) raises a trap to notify the operating system that action is required.
- The OS transitions from user mode to kernel mode to handle this trap and locate the missing page on disk.
Process State During Page Fault Handling
- Upon encountering a page fault, the OS finds where the required page is stored on disk and initiates its retrieval.
- While waiting for data from disk—which takes significantly longer than CPU operations—the OS switches context to another process.
Completing Page Retrieval
- Once the requested page has been read from disk, an interrupt signals completion back to the OS.
- The OS updates its records (the page table), recording the frame that now holds the retrieved page and setting its present bit.
Resuming Execution After Page Fault
- Once the page table's frame number and present bit have been updated, the process that was blocked on the fault is marked ready for execution again.
- When the scheduler next runs it, execution resumes by re-executing the instruction that caused the fault.
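The fault-handling sequence above can be sketched end to end as a toy simulation; every class and field name here is invented for illustration, not taken from a real OS:

```python
# Toy end-to-end sketch of page-fault handling (all names illustrative).
class Entry:
    def __init__(self):
        self.frame, self.present = None, False

class Process:
    def __init__(self, pid):
        self.pid, self.state, self.page_table = pid, "RUNNING", {}

class FakeDisk:
    def read(self, addr, into_frame):
        pass  # stand-in for a slow swap-space read

def handle_page_fault(swap_map, free_frames, disk, process, vpn):
    disk_addr = swap_map[(process.pid, vpn)]    # 1. locate page in swap space
    process.state = "BLOCKED"                   # 2. block; OS runs others
    frame = free_frames.pop()
    disk.read(disk_addr, into_frame=frame)      #    slow disk I/O happens here
    entry = process.page_table.setdefault(vpn, Entry())
    entry.frame, entry.present = frame, True    # 3. update table on interrupt
    process.state = "READY"                     # 4. will retry faulting instr.
    return frame
```

The key point the sketch captures is ordering: the process stays blocked for the entire disk read, and only after the page table is updated is it made runnable again.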
Summary of Memory Access Process
- The overall access path checks the CPU cache first; on a cache miss, the MMU translates the virtual address into a physical one before main memory is accessed.
Understanding the Role of MMU in Memory Management
The Functionality of the MMU
- Main memory understands only physical addresses; the Memory Management Unit (MMU) bridges this gap, translating addresses after the CPU and cache have been consulted.
- Upon a TLB hit, the MMU translates virtual addresses to physical addresses, allowing data retrieval from memory for CPU execution. A TLB miss complicates this process.
Handling TLB Misses
- In case of a TLB miss, the MMU must navigate through multiple page tables to locate the corresponding page table entry for the requested virtual address.
- If the page is present in memory, it retrieves the physical frame number and accesses data successfully.
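A page-table walk on a TLB miss might look like the following toy sketch of a two-level table; the index split, page size, and dict representation are assumptions for illustration only:

```python
def walk(root, vaddr, page_size=4096, index_bits=10):
    """Toy two-level page-table walk: physical address, or None on a fault."""
    vpn, offset = divmod(vaddr, page_size)
    top, bottom = divmod(vpn, 1 << index_bits)  # split VPN into two indices
    second_level = root.get(top)
    if second_level is None:
        return None                             # no mapping at all
    entry = second_level.get(bottom)
    if entry is None or not entry["present"]:
        return None                             # swapped out: page fault
    return entry["frame"] * page_size + offset
```

Returning `None` stands in for the MMU raising a fault; real hardware distinguishes a missing mapping (invalid access) from a present bit that is simply clear (page on disk).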
Page Fault Scenarios
- If a valid virtual address points to a page swapped out to disk (not present in memory), a page fault occurs, prompting OS intervention to read from disk.
- After handling a page fault and loading the required page back into memory, execution resumes without further faults.
Invalid Page Access
- An invalid access results in an MMU trap to the OS, which may terminate or take corrective action if an erroneous address is accessed.
Challenges During Page Fault Handling
Managing Memory Constraints
- When servicing a page fault, if all frames are occupied in main memory, existing pages must be swapped out to create space for new ones.
Swapping Mechanism
- The operating system writes an existing page back to swap space before bringing in a new one. This involves significant overhead.
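That eviction step can be sketched as follows. Real hardware additionally keeps a dirty (modified) bit, which lets the OS skip the write-back for unmodified pages; all names here are illustrative:

```python
def evict_page(entry, disk, swap_addr):
    """Free a victim page's frame, writing it to swap only if modified."""
    if entry["dirty"]:
        disk.write(swap_addr, entry["frame"])  # costly extra disk I/O
        entry["dirty"] = False
    entry["present"] = False                   # next access will page-fault
    return entry["frame"]
```

The dirty-bit check is why eviction cost varies: a clean victim costs nothing beyond bookkeeping, while a dirty one doubles the disk traffic of the fault.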
Page Replacement Policies
Identifying Pages for Swapping
- To manage limited space effectively during swaps, operating systems implement various page replacement policies that determine which pages should be evicted.
Optimal vs Practical Policies
- The optimal policy evicts the page whose next use lies farthest in the future, but it is impractical because future accesses cannot be predicted.
FIFO Policy Overview
- First-In First-Out (FIFO): Evicts pages based on their arrival time; however, it may lead to removing frequently accessed pages prematurely.
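FIFO is easy to simulate; this is a sketch, and `fifo_faults` is an invented helper name:

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement."""
    resident, order, faults = set(), deque(), 0
    for page in refs:
        if page in resident:
            continue                          # hit: FIFO order is unchanged
        faults += 1
        if len(resident) == num_frames:
            resident.remove(order.popleft())  # evict the oldest arrival
        resident.add(page)
        order.append(page)
    return faults
```

On the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 this yields 9 faults with 3 frames but 10 with 4 frames, a concrete instance of Bélády's anomaly.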
Least Recently Used Policy
LRU Strategy Explained
- Least Recently Used (LRU): This common policy removes pages that have not been used recently under the assumption they will remain unused going forward.
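LRU can be simulated with an ordered map serving as the recency list; a sketch, with an invented helper name:

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    """Count page faults under LRU replacement."""
    resident, faults = OrderedDict(), 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)    # hit: now most recently used
            continue
        faults += 1
        if len(resident) == num_frames:
            resident.popitem(last=False)  # evict the least recently used
        resident[page] = True
    return faults
```

Unlike FIFO, a hit reorders the eviction queue, which is exactly what protects frequently accessed pages from premature eviction.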
Understanding Page Replacement Policies
Overview of Physical Memory and Page References
- Physical memory can hold only a limited number of pages (three frames in this example), so pages must be evicted to make room for new ones.
- The sequence of page accesses is referred to as the reference string, which illustrates how processes access different pages in memory.
Analyzing Page Faults with Different Policies
- Initial accesses (0, 1, 2) result in page faults due to cold misses; these are expected since memory starts empty.
- When accessing page 3, a decision must be made on which existing page to evict. The optimal policy looks ahead to determine the least useful page.
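The optimal policy's look-ahead can be simulated offline, since the full reference string is known in hindsight; this sketch uses an invented helper name:

```python
def optimal_faults(refs, num_frames):
    """Count faults when evicting the page whose next use is farthest away."""
    resident, faults = set(), 0
    for i, page in enumerate(refs):
        if page in resident:
            continue
        faults += 1
        if len(resident) == num_frames:
            def next_use(p):
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float("inf")   # never used again: ideal victim
            resident.remove(max(resident, key=next_use))
        resident.add(page)
    return faults
```

This is a benchmark, not a deployable policy: a running OS cannot call `refs.index` on accesses that have not happened yet.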
Optimal Policy vs. FIFO Policy
- The optimal policy evicts the page whose next use is farthest in the future (here, page 2), resulting in fewer faults than FIFO.
- FIFO can perform much worse than optimal because it may evict popular pages that simply arrived early; it also suffers from Bélády's anomaly, where adding more frames can paradoxically increase the number of page faults.
Performance Comparison: LRU Policy
- LRU (Least Recently Used) generally performs better than FIFO and, under favorable access patterns, can match the optimal policy.
- In LRU, recently accessed pages remain in memory while older ones are considered for eviction based on past usage patterns.
Implementation Challenges of LRU
- Locality of reference suggests that recently used pages are likely to be used again soon; thus, LRU leverages this behavior for efficiency.
- Operating systems face challenges implementing LRU since they do not track every memory access directly; hardware assistance is necessary.
Access Bits and Page Management
- The MMU sets an access bit whenever a page is accessed, allowing the OS to track usage indirectly by periodically checking these bits.
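One common way the OS turns those access bits into an LRU approximation is the classic clock (second-chance) algorithm; a toy sketch, assuming the (simulated) MMU sets an `accessed` flag on each reference:

```python
def clock_pick_victim(frames, hand):
    """Sweep the 'clock hand' over frames, clearing access bits, until a
    frame whose bit is already clear is found. Returns (victim, new_hand)."""
    while True:
        if frames[hand]["accessed"]:
            frames[hand]["accessed"] = False  # recently used: second chance
            hand = (hand + 1) % len(frames)
        else:
            return hand, (hand + 1) % len(frames)
```

Because the sweep clears bits as it goes, the loop always terminates: even if every frame was recently accessed, the hand finds a clear bit within one full revolution.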