Operating Systems Lecture 13: Locks

Understanding Locks and Mutexes in Multithreading

Introduction to Locks

  • The lecture focuses on understanding locks in detail, particularly in the context of multithreading and shared data access.
  • It highlights the race condition that occurs when multiple threads increment a shared variable (e.g., balance), leading to incorrect values.
  • A special lock variable, referred to as a mutex, is introduced to provide mutual exclusion during critical sections.

Mutex Functionality

  • The process involves declaring a mutex, calling the lock function before accessing critical sections, and unlocking it afterward.
  • Only one thread can own the mutex at any time; other threads attempting to acquire it will block until it's released.
  • Libraries like pthread provide various types of locks for user programs to ensure mutual exclusion and atomicity.
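The usage pattern above can be sketched with the pthread API. This is a minimal illustration, assuming the lecture's running balance example; the `deposit` function and iteration count are invented for the sketch:

```c
// Sketch: guarding a shared balance with a pthread mutex.
// The balance/deposit names follow the lecture's running example;
// the loop count is arbitrary.
#include <pthread.h>

static int balance = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *deposit(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    // acquire: blocks if another thread holds it
        balance++;                    // critical section
        pthread_mutex_unlock(&lock);  // release: wakes one waiting thread
    }
    return NULL;
}

// Run two depositors concurrently; with the mutex held around the
// increment, the final balance is deterministic (no lost updates).
int run_deposits(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, deposit, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return balance;
}
```

Without the lock/unlock calls, the two read-modify-write increments can interleave and the final balance would often be less than 200000.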

Goals of Lock Implementation

  • Key goals for implementing locks include guaranteeing mutual exclusion, fairness among threads, and low overhead in resource usage.
  • Fairness ensures that all waiting threads get a chance without starvation; low overhead minimizes CPU and memory consumption during locking operations.

Challenges with Disabling Interrupts

  • Implementing locks requires both software logic and hardware support; simply disabling interrupts isn't sufficient for effective locking mechanisms.
  • Disabling interrupts could lead to issues if user programs misuse this power or run indefinitely without re-enabling them.

Limitations of Simple Locking Mechanisms

  • Disabling interrupts fails on multi-core systems: interrupts are disabled only on the local core, so a thread running on another core can still access the shared data.
  • The approach is viable only on single-processor systems, and only for trusted code inside the operating system; it is not suitable for general user programs or multi-core environments.

Failed Implementation of Locking Mechanism

Overview of the Flag Variable as a Lock

  • The concept introduced is a simple locking mechanism using a flag variable, where setting the flag to 1 indicates that a lock is held, and setting it to 0 releases the lock.

Spin Waiting for Lock Acquisition

  • When a thread attempts to acquire the lock, it checks the flag. If the flag is 1 (indicating it's held), it enters a busy-wait loop until the flag becomes 0.

Race Condition in Lock Acquisition

  • A race condition arises when two threads attempt to set the flag concurrently. If one thread sees the flag as 0 but is interrupted before setting it, the other thread can also see 0; both then set the flag, and both believe they hold the lock.

Consequences of Race Conditions

  • This leads to multiple threads entering critical sections without mutual exclusion because they both believe they have acquired the lock due to timing issues during their execution.
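The flawed flag-based lock can be sketched as below; the type and function names are illustrative. The bug is that the test of the flag and the set of the flag are two separate steps:

```c
// The flawed flag-based lock: checking the flag and setting it are
// separate operations, so a thread can be preempted between them and
// two threads can both "acquire" the lock.
typedef struct { int flag; } flawed_lock_t;

void flawed_init(flawed_lock_t *l) { l->flag = 0; }

void flawed_acquire(flawed_lock_t *l) {
    while (l->flag == 1)   // TEST: spin while the lock appears held...
        ;                  // (preemption right here breaks mutual exclusion)
    l->flag = 1;           // ...then SET -- not atomic with the test
}

void flawed_release(flawed_lock_t *l) { l->flag = 0; }
```

Single-threaded, this behaves like a lock; the failure only shows up when two threads race between the test and the set, which is exactly why an atomic instruction is needed.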

Need for Atomic Instructions

  • To prevent such race conditions, hardware support through atomic instructions is necessary. These allow operations like checking and setting values without interruption.

Atomic Instructions: Test and Set

Functionality of Test and Set Instruction

  • The test-and-set instruction reads an old value, sets it to a new value, and returns the old value in one atomic operation, preventing interruptions during execution.

Implementing Locks with Test and Set

  • In this implementation, if test-and-set returns 1 after trying to set the flag to 1, it indicates that another thread holds the lock. The current thread will then wait until it can successfully set it from 0 to 1.

Behavior of Threads After Acquiring Locks

  • Once a thread acquires a lock using test-and-set by changing its state from zero to one atomically, other threads will continue spinning in their loops until they can acquire access.
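A test-and-set spin lock can be sketched using C11 atomics, where `atomic_exchange` atomically stores the new value and returns the old one (the test-and-set behavior described above); the type and function names are illustrative:

```c
// Spin lock built on an atomic test-and-set (C11 atomics).
// atomic_exchange atomically writes 1 and returns the previous value:
// old value 0 means we just acquired the lock; 1 means it was held.
#include <stdatomic.h>

typedef struct { atomic_int flag; } tas_lock_t;

void tas_init(tas_lock_t *l) { atomic_store(&l->flag, 0); }

void tas_lock(tas_lock_t *l) {
    while (atomic_exchange(&l->flag, 1) == 1)
        ;  // spin: another thread holds the lock
}

void tas_unlock(tas_lock_t *l) { atomic_store(&l->flag, 0); }
```

Because the test and the set happen in one indivisible hardware operation, the window that broke the flag-based lock no longer exists.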

Comparative Analysis: Compare And Swap

Introduction to Compare And Swap (CAS)

How to Implement Locks with Compare and Swap?

Understanding Spin Locks

  • A spin lock is implemented using Compare and Swap (CAS). The process involves checking if the old value of a flag is zero, setting it to one, and confirming that the CAS operation returns zero, indicating successful lock acquisition.
  • If the CAS returns one, it indicates that another thread holds the lock. In this case, the current thread enters a busy-wait loop until it can acquire the lock.
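The CAS-based spin lock can be sketched with C11's `atomic_compare_exchange_strong`, which succeeds only if the flag still holds the expected value; the names here are illustrative:

```c
// Spin lock built on compare-and-swap (C11 atomics).
// CAS atomically checks that flag == expected (0) and, only then,
// sets it to 1; on failure it writes the observed value back into
// expected, so we reset expected before retrying.
#include <stdatomic.h>

typedef struct { atomic_int flag; } cas_lock_t;

void cas_init(cas_lock_t *l) { atomic_store(&l->flag, 0); }

void cas_lock(cas_lock_t *l) {
    int expected = 0;
    while (!atomic_compare_exchange_strong(&l->flag, &expected, 1))
        expected = 0;  // lock was held; spin and try again
}

void cas_unlock(cas_lock_t *l) { atomic_store(&l->flag, 0); }
```

For a plain spin lock, CAS and test-and-set are interchangeable; CAS is more general because it can guard arbitrary expected-to-new transitions, not just 0-to-1.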

Alternatives to Spin Locks

  • Continuous spinning wastes CPU resources; therefore, an alternative approach allows threads to yield control when they cannot acquire a lock. This method enables threads to sleep or block instead of spinning.
  • Such locks are referred to as sleeping mutexes. By default, "mutex" implies a sleeping mutex unless specified otherwise.

Implementation of Sleeping Mutexes

  • In a sleeping mutex implementation, if a thread finds that the flag is already set (value of one), it yields control back to the operating system instead of continuing in a busy loop.
  • Yielding effectively moves the process from running state to ready state, allowing for later rescheduling by the OS without wasting CPU cycles.
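A simplified sketch of the yielding approach: on failure to acquire, the thread calls `sched_yield()` instead of spinning. Note this is only an approximation of a real sleeping mutex, which would park the thread on a wait queue and wake it on unlock rather than rely on rescheduling:

```c
// Yield-on-contention lock: rather than burning CPU cycles spinning,
// the thread gives up the CPU when the lock is taken. sched_yield()
// moves the caller from running to ready, as described above.
#include <stdatomic.h>
#include <sched.h>

typedef struct { atomic_int flag; } yield_lock_t;

void ylock_init(yield_lock_t *l) { atomic_store(&l->flag, 0); }

void ylock_acquire(yield_lock_t *l) {
    while (atomic_exchange(&l->flag, 1) == 1)
        sched_yield();  // let the OS schedule someone else
}

void ylock_release(yield_lock_t *l) { atomic_store(&l->flag, 0); }
```

A production sleeping mutex (e.g., pthread's) additionally keeps a queue of waiters so that unlock can wake exactly one thread instead of hoping the scheduler picks it.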

Choosing Between Spin Locks and Sleeping Mutexes

  • For user-space programs, sleeping mutexes are generally preferred over spin locks due to their efficiency in conserving CPU cycles while waiting for locks.
  • However, both types can be used depending on specific requirements; it's essential to consider context when choosing between them.

Locking Mechanisms in Operating Systems

  • Inside the operating system, spin locks are used rather than sleeping mutexes: kernel code, for example while interacting with hardware, often cannot yield the CPU and must keep executing until it releases the lock.
  • Kernel code relies on spin locks for protecting shared data structures across multiple processes running in kernel mode on different cores.

Precautions When Using Spin Locks

  • When acquiring a spin lock within kernel code, interrupts must be disabled on that CPU core. This prevents complications such as deadlocks caused by interrupt handlers requesting the same lock.
  • It’s crucial not to perform blocking operations while holding a spin lock; doing so could lead other threads into indefinite spinning states waiting for access.

Kernel and User Space Locking Mechanisms

Importance of Spin Locks in Kernel Development

  • Spin locks are central to kernel code; they should be held for as short a time as possible, since other cores spin while waiting for them.
  • In user-space programs, particularly C with pthread locks, developers can use sleeping mutexes, which let a thread sleep while waiting for a lock instead of spinning.

Guidelines for Using Locks

  • When multiple threads access shared data (e.g., counters, data structures), it is essential to always hold a lock to prevent concurrent access issues.
  • Thread-safe data structures ensure that operations like insertions and deletions are protected by internal locking mechanisms, providing mutual exclusion.
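A thread-safe data structure in this sense embeds its lock, so callers get mutual exclusion without managing a mutex themselves. A minimal sketch with an invented `counter_t` type:

```c
// A thread-safe counter: the lock lives inside the structure, and
// every operation acquires it internally, so callers never touch
// the mutex directly.
#include <pthread.h>

typedef struct {
    int value;
    pthread_mutex_t lock;
} counter_t;

void counter_init(counter_t *c) {
    c->value = 0;
    pthread_mutex_init(&c->lock, NULL);
}

void counter_increment(counter_t *c) {
    pthread_mutex_lock(&c->lock);
    c->value++;                      // protected update
    pthread_mutex_unlock(&c->lock);
}

int counter_get(counter_t *c) {
    pthread_mutex_lock(&c->lock);
    int v = c->value;                // protected read
    pthread_mutex_unlock(&c->lock);
    return v;
}
```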

Coarse-Grained vs. Fine-Grained Locking

  • Coarse-grained locking involves using a single lock for an entire data structure (e.g., a linked list), simplifying management but potentially limiting parallelism.
  • Fine-grained locking allows individual elements of a data structure to be locked separately, enhancing performance through increased parallelism but complicating the implementation.
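The coarse-grained approach can be sketched as a linked list guarded by one lock; the type and function names are illustrative. A fine-grained variant would instead place a lock in each node and acquire them hand-over-hand during traversal:

```c
// Coarse-grained concurrent linked list: a single lock guards the
// whole list, so insert and lookup are simple but serialize with
// each other.
#include <pthread.h>
#include <stdlib.h>

typedef struct node { int key; struct node *next; } node_t;

typedef struct {
    node_t *head;
    pthread_mutex_t lock;  // one lock for the entire list
} list_t;

void list_init(list_t *L) {
    L->head = NULL;
    pthread_mutex_init(&L->lock, NULL);
}

int list_insert(list_t *L, int key) {
    node_t *n = malloc(sizeof(node_t));
    if (!n) return -1;           // allocate outside the critical section
    n->key = key;
    pthread_mutex_lock(&L->lock);
    n->next = L->head;           // push at head under the lock
    L->head = n;
    pthread_mutex_unlock(&L->lock);
    return 0;
}

int list_lookup(list_t *L, int key) {
    pthread_mutex_lock(&L->lock);
    for (node_t *n = L->head; n != NULL; n = n->next) {
        if (n->key == key) {
            pthread_mutex_unlock(&L->lock);
            return 1;            // found
        }
    }
    pthread_mutex_unlock(&L->lock);
    return 0;                    // not found
}
```

Note that even here the allocation is kept outside the critical section, a small instance of the general advice to hold locks for as short a time as possible.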

Performance Considerations

  • While fine-grained locking improves performance by allowing more concurrent operations, it requires careful management of multiple locks.
  • Coarse-grained locking is easier to implement since it uses one lock for all operations, but it may lead to bottlenecks due to reduced concurrency.

Responsibility of User Programs

Video description

Based on the book Operating Systems: Three Easy Pieces (http://pages.cs.wisc.edu/~remzi/OSTEP/) For more information please visit https://www.cse.iitb.ac.in/~mythili/os/