Operating Systems Lecture 9: Paging

Paging: A Key Memory Management Technique

Introduction to Paging

  • The lecture introduces paging as the most common memory management technique in modern operating systems, contrasting it with simpler techniques like base and bound.
  • Paging involves dividing a process's memory image into fixed-size chunks called pages, rather than allocating all memory as one chunk.

Structure of Paging

  • Physical memory is also divided into frames of the same size, allowing logical pages to be mapped to physical frames (e.g., page 0 maps to frame 3).
  • This structure eliminates external fragmentation but introduces a smaller issue known as internal fragmentation.

Internal Fragmentation Explained

  • Internal fragmentation occurs when a process requests less memory than a full page size (e.g., requesting 5 bytes results in allocation of an entire page).
  • Typical page sizes are around 4KB, leading to potential waste of space within allocated pages.
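A quick back-of-the-envelope sketch of internal fragmentation, assuming the 4 KB page size mentioned above:

```python
PAGE_SIZE = 4096  # typical 4 KB page

def pages_and_waste(request_bytes):
    """Pages allocated for a request, and bytes lost to internal fragmentation."""
    pages = -(-request_bytes // PAGE_SIZE)       # ceiling division
    waste = pages * PAGE_SIZE - request_bytes    # unused tail of the last page
    return pages, waste

print(pages_and_waste(5))     # → (1, 4091): 5 bytes still cost a whole page
print(pages_and_waste(4097))  # → (2, 4095): one byte over costs a second page
```

Note that the waste is bounded by one page per allocation, which is why internal fragmentation is considered a smaller problem than external fragmentation.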

Page Table Overview

  • Each process has its own page table that facilitates virtual-to-physical address translation.
  • The page table is essentially an array mapping virtual page numbers to physical frame numbers; it lives in memory managed by the OS kernel as part of each process's state.

Page Table Entries and Their Functions

  • Each entry in the page table corresponds to a virtual page and contains critical information such as the physical frame number where that virtual page is stored.

Understanding Address Translation and Page Tables

Overview of Page Table Entries

  • The page table entry contains the frame number, which is crucial for address translation, along with additional information about each logical page.
  • A virtual address can be divided into a page number and an offset; for example, with a page size of 10 bytes, virtual address 112 has page number 11 (112 ÷ 10) and offset 2 (112 mod 10).
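The split above can be sketched directly; the decimal page size of 10 is the lecture's toy example, while real systems use power-of-two page sizes so the split becomes bit operations:

```python
PAGE_SIZE = 10  # toy decimal page size from the lecture's example

def split(vaddr, page_size=PAGE_SIZE):
    """Divide a virtual address into (page number, offset)."""
    return vaddr // page_size, vaddr % page_size

print(split(112))  # → (11, 2): page number 11, offset 2

# With a power-of-two page size, no division is needed:
OFFSET_BITS = 12                             # 4 KB pages
vpn    = 0x12345 >> OFFSET_BITS              # high bits: virtual page number
offset = 0x12345 & ((1 << OFFSET_BITS) - 1)  # low 12 bits: offset within the page
print(hex(vpn), hex(offset))  # → 0x12 0x345
```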

Role of the MMU in Address Translation

  • The Memory Management Unit (MMU) extracts the virtual page number and offset to map it to a physical frame number using the page table.
  • The operating system gives the MMU only the starting (base) address of the page table rather than all its entries; the MMU then indexes into the table using the virtual page number.

Memory Access Process

  • When a CPU requests data from memory using a virtual address, it first goes through the MMU for translation before accessing physical memory.
  • This process introduces overhead as every memory access requires looking up the corresponding entry in the page table via the MMU.
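The translation step can be sketched as follows; the page table contents here are hypothetical (including the page 0 → frame 3 mapping used earlier as an example):

```python
PAGE_SIZE = 4096
# Hypothetical single-level page table: key = virtual page number, value = frame.
page_table = {0: 3, 1: 7, 2: 5}   # e.g. page 0 lives in frame 3

def translate(vaddr):
    """Mimic the MMU: split the address, look up the frame, rebuild the address."""
    vpn, offset = vaddr // PAGE_SIZE, vaddr % PAGE_SIZE
    if vpn not in page_table:
        raise RuntimeError("page fault: no mapping for page %d" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x10)))  # → 0x3010: page 0, offset 0x10, placed in frame 3
```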

Impact of Paging on Performance

  • Paging can double the number of memory accesses, since each request must first fetch the corresponding page table entry (itself stored in memory) before the actual data can be read.
  • To mitigate this overhead, modern systems implement caching mechanisms like TLB to store recent mappings from virtual addresses to physical addresses.

Functionality and Management of TLB

  • The MMU first checks if an entry exists in TLB; if not found (a TLB miss), it must access memory to retrieve necessary information from the page table.
  • Locality of reference allows programs to reuse recently accessed instructions or data frequently, enhancing cache effectiveness by reducing unnecessary lookups.
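A minimal software sketch of this hit/miss behavior (real TLBs are hardware structures, typically set-associative; the LRU eviction and 4-entry capacity here are illustrative assumptions):

```python
from collections import OrderedDict

class TLB:
    """Tiny LRU-evicting TLB sketch."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()   # vpn -> frame, oldest first
        self.hits = self.misses = 0

    def lookup(self, vpn, page_table):
        if vpn in self.entries:
            self.hits += 1
            self.entries.move_to_end(vpn)        # refresh LRU position
            return self.entries[vpn]
        self.misses += 1                         # TLB miss: walk the page table
        frame = page_table[vpn]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)     # evict least recently used
        self.entries[vpn] = frame
        return frame

pt = {i: i + 100 for i in range(10)}
tlb = TLB()
for vpn in [0, 1, 0, 1, 0]:       # locality: repeated pages hit in the TLB
    tlb.lookup(vpn, pt)
print(tlb.hits, tlb.misses)       # → 3 2
```

The access pattern shows why locality matters: after the first two compulsory misses, every repeated access is served from the TLB without touching the page table.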

Context Switching and TLB Management

  • Each process has its own set of mappings, so on a context switch the TLB entries must be flushed (or tagged with an address-space identifier) to prevent stale translations from the previous process.
  • The complexity involved in maintaining high hit rates within limited TLB entries is primarily handled by hardware rather than operating systems.

Storage Considerations for Page Tables

Understanding Page Tables and Memory Management

Total Number of Page Table Entries

  • The total number of page table entries is found by dividing the 2^32-byte address space (32-bit addresses) by the 2^12-byte (4 KB) page size, giving 2^20 logical pages. Each page table thus contains 2^20 entries; at 4 bytes per entry, every page table occupies 4 MB.
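The arithmetic above, spelled out:

```python
ADDRESS_BITS = 32        # 32-bit virtual address space: 2^32 bytes
PAGE_BITS    = 12        # 4 KB pages: 2^12 bytes
ENTRY_BYTES  = 4         # 4 bytes per page table entry

entries     = 2 ** (ADDRESS_BITS - PAGE_BITS)   # 2^20 = 1,048,576 logical pages
table_bytes = entries * ENTRY_BYTES

print(entries)                  # → 1048576
print(table_bytes // 2**20)     # → 4 (MB per process, just for the page table)
```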

Memory Allocation for Page Tables

  • Every process requires 4 MB of memory solely for its page table, which raises the question of how operating systems manage this allocation efficiently; even a small executable forces this large reservation for its page table.

Multi-Level Page Tables

  • To address the large memory requirement, operating systems implement multi-level page tables where the main page table is divided into smaller chunks or pages. This allows more efficient use of memory as these chunks are stored separately rather than as one large entity.

Structure of Multi-Level Page Tables

  • In a multi-level setup, an outer page table (or directory) keeps track of the frame numbers corresponding to these smaller chunks. Accessing a specific entry involves first referencing the outer page table before navigating through inner levels to find the desired frame number.
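The two-step lookup can be sketched for the 32-bit/4 KB case above; the 10-10-12 bit split is the conventional choice (1024 outer entries, each covering 1024 inner entries), though the exact widths are an assumption here:

```python
# Two-level split for a 32-bit virtual address with 4 KB pages:
# 10 bits index the outer table (directory), 10 bits the inner table,
# and the low 12 bits are the offset within the page.
OFFSET_BITS, INNER_BITS = 12, 10

def split_two_level(vaddr):
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    inner  = (vaddr >> OFFSET_BITS) & ((1 << INNER_BITS) - 1)
    outer  = vaddr >> (OFFSET_BITS + INNER_BITS)
    return outer, inner, offset

print(split_two_level(0x00403004))  # → (1, 3, 4)
```

The key saving is that inner tables for unused regions of the address space need not be allocated at all: only the outer table (4 KB) plus the inner tables actually in use consume memory.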

Address Translation Process

Based on the book Operating Systems: Three Easy Pieces (http://pages.cs.wisc.edu/~remzi/OSTEP/) For more information please visit https://www.cse.iitb.ac.in/~mythili/os/