Computer Organization – Lecture 12 – Cache Memory
In this lesson on computer organization, the focus is on how cache memories are organized and the principles guiding memory hierarchy design for both cache and virtual memory.
Cache Memory Organization with Direct Mapping
- In a direct-mapped cache, the address of a main-memory block determines the single cache location where its data is written.
- The number of memory blocks follows from dividing the main-memory size by the block size; the cache position for a block is its address modulo the number of cache blocks.
- All address relationships in memory hierarchy must be powers of two for efficient calculations.
- Example: With 32 positions in main memory and 8 positions in cache, the cache position for a given address is the address modulo 8.
- Lower-order bits of addresses are used to identify positions in cache; for example, with 8 positions, 3 bits are needed.
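The modulo placement rule above can be sketched in a few lines. The sizes follow the lecture's example (32 main-memory positions, 8 cache positions); because both are powers of two, the modulo is just the low-order 3 bits of the address:

```python
# Direct-mapped placement: cache index = address mod (number of cache positions).
MAIN_MEMORY_SIZE = 32   # positions in main memory (a power of two)
CACHE_SIZE = 8          # positions in cache (a power of two) -> 3 index bits

def cache_index(address: int) -> int:
    """Low-order bits of the address select the cache position."""
    return address % CACHE_SIZE      # same as address & (CACHE_SIZE - 1)

# Addresses 5, 13, 21 and 29 share the low 3 bits, so all map to position 5.
for addr in (5, 13, 21, 29):
    print(addr, "->", cache_index(addr))
```

Note that many addresses compete for the same position; the tag bits discussed next resolve which one is currently stored there.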
Tag Bits and Validity Control
- Tag bits uniquely identify which word is stored in a given cache position.
- Validity control bits indicate whether a cache entry holds valid data; all valid bits start at zero when the computer boots.
- The tag holds the high-order address bits not used for position selection, so the full address of the cached word can be reconstructed.
Cache Operation Example
- The tag helps distinguish between different words stored at various positions within the cache.
Understanding Cache Memory Organization
In this section, the speaker delves into the organization of cache memory, focusing on aspects like accessing data, cache hits and misses, and the structure of cache memory in direct mapping.
Cache Access and Data Retrieval
- Cache stores information that is frequently accessed to improve processing speed.
- When a requested memory address is not in the cache (a cache miss), the data must be fetched from the next memory level.
Direct Mapping Structure
- Direct mapping involves associating each block of main memory with only one specific cache location.
- Conflict failures occur when new data needs to be stored in an already occupied cache position.
Hardware Implementation for Direct Mapping
- Hardware for direct mapping requires tag bits for data identification and a valid bit for status checking.
- The process involves selecting cache lines using index bits and comparing tags for data retrieval.
Cache Memory Addressing and Organization
This part explores how memory addresses are structured within cache memory, including tag identification, index selection, and offset determination.
Tag Comparison Process
- Tags stored in the cache are compared with incoming address tags to identify relevant data blocks.
- Successful tag comparison along with valid bit confirmation results in a hit signal.
Address Organization in Cache Memory
- Addresses consist of index bits (line identification), tag bits (data differentiation), and offset bits (byte positioning).
- Offset aids in byte or block identification within memory words or blocks.
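The three address fields can be extracted with shifts and masks. The field widths here are assumptions for illustration: 4-byte (32-bit) words give 2 offset bits, and a hypothetical 16-line cache gives 4 index bits; everything above those is the tag:

```python
# Decompose a byte address into (tag, index, offset) fields.
OFFSET_BITS = 2      # byte within a 32-bit (4-byte) word
INDEX_BITS = 4       # 16 cache lines (assumed for this example)

def split_address(address: int):
    offset = address & ((1 << OFFSET_BITS) - 1)
    index = (address >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = address >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

# Byte address 90 = 0b1_0110_10: tag 1, index 6, offset 2.
print(split_address(90))
```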
Memory Address Calculation and Associativity
This segment discusses how memory addresses map to cache locations based on calculations involving block numbers, indices, and tags.
Byte Offset Determination
- Byte offsets locate specific bytes within a memory word, since the MIPS architecture is byte-addressed.
- For instance, a 32-bit word requires two offset bits for byte positioning.
Block Index Calculation
- The number of blocks determines how many index bits are needed; with a power-of-two block count, the index width is simply log2 of the number of blocks, so indexing requires no division hardware.
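The index-width calculation is a one-liner when block counts are powers of two, which is why the hierarchy insists on them:

```python
# Index bits needed for a power-of-two number of cache blocks.
def index_bits(num_blocks: int) -> int:
    assert num_blocks > 0 and num_blocks & (num_blocks - 1) == 0, \
        "block count must be a power of two"
    return num_blocks.bit_length() - 1   # equivalent to log2(num_blocks)

print(index_bits(8))     # 3 index bits
print(index_bits(1024))  # 10 index bits
```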
Cache Associativity Types
Different types of cache associativity models are explored here, highlighting trade-offs between flexibility and complexity.
Direct Mapping vs. Fully Associative Caches
- Direct mapping restricts each main memory block to a single corresponding cache location while fully associative caches offer more flexibility.
Set Associativity
In this section, the speaker discusses the concept of associativity in cache memory and how it impacts cache organization.
Associativity in Cache Memory
- Set associativity gives freedom to choose any position within a set when placing a block; the number of sets is the cache size divided by the degree of associativity.
- The number of placement choices for a block grows with the associativity level, which directly influences cache performance.
- Fully associative caches allow data to be placed anywhere but require comparing all tags for access, leading to increased complexity.
- Associative caches reduce conflict misses but introduce complexity in tag comparison and selection process for accessing data.
- Higher associativity decreases conflict misses but adds complexity and time in verifying correct data access.
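The spectrum of placement freedom described above can be made concrete. For a hypothetical 8-line cache, the candidate lines for a given block are: one line when direct-mapped (1-way), the lines of one set when set associative, and every line when fully associative (ways = total lines):

```python
# Candidate cache lines for a block under varying degrees of associativity.
NUM_LINES = 8   # assumed cache size for this sketch

def candidate_lines(block_number: int, ways: int):
    """Return the list of cache lines where this block may be placed."""
    num_sets = NUM_LINES // ways
    s = block_number % num_sets          # set index from low-order bits
    return [s * ways + w for w in range(ways)]

print(candidate_lines(12, ways=1))          # direct-mapped: a single line
print(candidate_lines(12, ways=2))          # 2-way set associative: one set
print(candidate_lines(12, ways=NUM_LINES))  # fully associative: any line
```

The trade-off is visible in the hardware cost: a lookup must compare tags in parallel across every candidate line, so more ways means more comparators and a slower hit check.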
Associativity Across the Memory Hierarchy
This part delves into the implications of associativity across different memory hierarchy levels and outlines key considerations in cache design.
Memory Hierarchy and Cache Design
- Associativity principles apply across various memory hierarchy levels, impacting cache design decisions at each level.
- Four key aspects of cache design include block placement, block replacement strategies, what happens on a miss, and write policies.
- Block location determination varies based on associativity level: direct mapping offers one choice while higher associativities provide multiple options within sets or entire cache.
- Designers face trade-offs between associativity levels: higher associativity reduces conflict misses but increases implementation complexity and access time.
- Balancing different types of associativities at various cache levels is crucial for optimizing overall system performance.
Block Placement
This segment explores how block location is determined within caches based on different mapping techniques.
Determining Block Location
- Block location determination is influenced by the chosen mapping technique: direct mapping leads to a single choice while set associative provides multiple options within sets.
Replacement and Write Policies
In this section, the speaker discusses memory management strategies, focusing on cache utilization and replacement policies.
Cache Utilization Strategies
- When the target set still has an empty (invalid) position, the incoming block is placed there; once the set is full, the least recently used (LRU) block is the natural candidate for replacement.
LRU Implementation Challenges
- High associativity in hardware makes implementing LRU costly. Some studies suggest that random replacement can approximate LRU behavior effectively in caches.
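Both policies for one set of an associative cache can be sketched in a few lines. LRU tracks recency exactly, which is what makes it costly in hardware at high associativity; random selection needs no bookkeeping at all, and, per the studies mentioned, approximates LRU well in practice:

```python
# Replacement-policy sketches for one set of an associative cache.
import random

def lru_victim(recency: list) -> int:
    """recency[i] = last access time of way i; evict the oldest way."""
    return recency.index(min(recency))

def random_victim(num_ways: int) -> int:
    """Pick any way uniformly at random -- no per-access bookkeeping."""
    return random.randrange(num_ways)

# Way 2 was touched longest ago (time 1), so LRU evicts it.
print(lru_victim([7, 5, 1, 6]))   # -> 2
```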
Memory Hierarchy Policies
- Two main write policies are discussed. With Write Through, every write updates both the cache and the next memory level, which keeps the levels consistent but imposes a cost on every store instruction.
Write Strategies Comparison
- Write Back updates only the copy in the cache, making writes faster; the modified block is copied to the next level only when it is evicted, so the levels are temporarily inconsistent (a dirty bit marks modified blocks).
Virtual Memory Considerations
- Because hard disks are so slow, virtual memory systems always use the Write Back strategy; cache memories, in contrast, may use either Write Through or Write Back.
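A minimal sketch (with assumed structure and names) contrasting the two policies: in write-through every store also reaches the next level immediately, while in write-back only the cached copy changes and a dirty bit defers the next-level write until eviction:

```python
# Write-through vs. write-back, modeled on a single cache line.
class Line:
    def __init__(self):
        self.data, self.dirty = None, False

def store_write_through(line: Line, next_level: dict, addr: int, value):
    line.data = value
    next_level[addr] = value          # next level updated on every write

def store_write_back(line: Line, addr: int, value):
    line.data = value
    line.dirty = True                 # next-level write is deferred

def evict_write_back(line: Line, next_level: dict, addr: int):
    if line.dirty:
        next_level[addr] = line.data  # the deferred write happens here
        line.dirty = False

memory = {}
line0 = Line()
store_write_back(line0, 0x40, 99)
print(memory.get(0x40))               # None: memory not yet updated
evict_write_back(line0, memory, 0x40)
print(memory.get(0x40))               # 99 only after eviction
```

This is why write-back suits slow lower levels such as disk: the expensive write happens once per replaced block rather than once per store.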
Key References for Further Study
The lecture concludes by referencing specific chapters from Hennessy and Patterson's book for additional reading.