Operating Systems Lecture 12: Introduction to threads and concurrency

Understanding Threads and Concurrency

Introduction to Single-Threaded Programs

  • The upcoming lectures will focus on threading and concurrency, starting with a review of single-threaded programs.
  • A single-threaded program has one thread of execution, where the CPU executes instructions sequentially from the program code.

Transition to Multi-Threading

  • In multi-threaded programs, multiple threads execute simultaneously, each having its own program counter while sharing the same code and memory space.
  • Each thread executes independently at a different point in the code; threads share resources like the heap and global variables, but each thread has its own stack and register state.

Differences Between Processes and Threads

  • Unlike processes that have separate memory images (e.g., process P and child C), threads share the same address space which simplifies communication.
  • Inter-process communication (IPC) is complex; threads can easily communicate through shared global variables.

Advantages of Using Threads

  • Creating threads allows multiple concurrent flows of execution within a program without duplicating its memory image, as forking a process would.
  • Threads facilitate easier communication since they can access shared variables directly, unlike processes that require IPC mechanisms.

Understanding Parallelism vs. Concurrency

  • Running multiple versions of a process enhances performance by utilizing all CPU cores effectively.

Understanding Concurrency and Parallelism in Computing

What is Concurrency?

  • Concurrency allows multiple processes to make progress together on a single CPU core by interleaving their executions. A few instructions from one process are executed, then the CPU context-switches to another process, creating an illusion of parallel execution.

Distinction Between Concurrency and Parallelism

  • While concurrency involves interleaved execution on a single core, parallelism refers to multiple processes or threads running simultaneously across different CPU cores. This enables true simultaneous execution.

Importance of Threads in Single-Core Machines

  • Even on single-core machines, using threads is beneficial as they allow for concurrent execution. If one thread blocks (e.g., waiting for disk I/O), another thread can continue executing, preventing CPU idleness.

Thread Execution and Performance

  • In scenarios where one thread blocks due to system calls, having multiple threads allows the program to maintain performance by switching between them during idle times. This interleaving enhances overall efficiency.

Operating System's Role in Thread Management

  • The operating system treats each thread as an independent entity similar to processes. Each thread has its own Thread Control Block (TCB), allowing the OS to manage context switching effectively.

Kernel Threads vs User-Level Threads

Kernel Threads Explained

  • Kernel threads are scheduled independently by the operating system kernel. When a program creates multiple threads, these are typically treated as separate kernel-level entities that can be managed individually.

User-Level Threads Overview

  • Some libraries provide user-level threads which do not correspond directly to kernel-level threads. These user-level implementations create an illusion of many threads while being multiplexed over fewer kernel threads.

Advantages and Disadvantages of User-Level Threads

  • User-level threads have lower overhead since context switches between them avoid kernel involvement; however, user-level threads multiplexed over a single kernel thread cannot run in parallel on multiple cores, because the kernel schedules only the one kernel thread it can see.

Practical Example: Creating Threads with Pthreads

Code Implementation Using Pthreads Library

  • A practical example demonstrates how to create multiple threads using the Pthreads library in C programming within Linux environments. The pthread_create function initiates new threads for concurrent execution.

Functionality of Created Threads

  • In this example, two additional threads (T1 and T2) are created alongside the main process (P). Each thread executes a specific function that prints its designated argument when started.

Synchronization with Join Function

  • The main thread calls pthread_join on each created thread, blocking until that thread finishes; this ensures the process does not exit before T1 and T2 complete.

Understanding Multi-threading and Race Conditions

Scheduling Entities in Multi-threading

  • The operating system schedules three entities concurrently, allowing for various execution orders (e.g., T2 before T1).
  • Multiple threads are created to perform tasks together, often sharing a global variable like a counter.

Thread Execution and Expectations

  • Two threads (T1 and T2) increment a shared counter 10 million times each.
  • One might expect the final value of the counter to be 20 million, but running the code multiple times yields different results.

Understanding Race Conditions

  • The unexpected behavior is not due to bugs but rather inherent in multi-threaded programming when accessing shared data.
  • A simple increment operation involves multiple instructions, which can lead to issues when executed by multiple threads simultaneously.

Example of Race Condition

  • When two threads read and increment the same counter value concurrently, they may both write back an incorrect result (e.g., both writing 51 instead of 52).
  • This overlapping execution leads to non-deterministic outcomes known as race conditions.
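The lost update can be traced instruction by instruction; register names below are illustrative, and the counter is assumed to start at 50 as in the example above:

```
Thread T1                       Thread T2
load  counter -> r1   (50)
                                load  counter -> r2   (50)
add   r1, 1           (51)
                                add   r2, 1           (51)
store r1 -> counter   (51)
                                store r2 -> counter   (51)
```

Both threads read 50 and both write back 51, so one increment is lost: the counter ends at 51 instead of 52.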

Critical Sections and Mutual Exclusion

  • Code segments that can cause race conditions are termed critical sections; they require mutual exclusion.

Locking Mechanisms in Multi-threaded Programming

Importance of Locking

  • Interruptions can occur at any time during program execution, necessitating a special mechanism to manage these interruptions effectively.
  • The upcoming lecture will focus on the concept of locking, which is crucial for writing multi-threaded code correctly.
  • Without mechanisms like locks, it becomes nearly impossible to handle multi-threaded programming accurately.
  • Race conditions are a significant concern in multi-threaded environments; they can lead to unpredictable behavior if not managed properly.
Video description

Based on the book Operating Systems: Three Easy Pieces (http://pages.cs.wisc.edu/~remzi/OSTEP/) For more information please visit https://www.cse.iitb.ac.in/~mythili/os/