L-3.1: Process Synchronization | Process Types | Race Condition | Operating System-1
Process Synchronization in Operating Systems
Understanding Process Synchronization
- Process synchronization is a crucial topic in operating systems, focusing on how multiple processes can run concurrently without conflict.
- There are two execution modes: serial mode (one process at a time) and parallel mode (multiple processes simultaneously).
- In modern systems, parallel execution is common due to multiprogramming and time-sharing environments, leading to potential conflicts between processes.
Types of Processes
Cooperative vs. Independent Processes
- Processes can be categorized into cooperative and independent types. Cooperative processes affect each other's execution, while independent processes do not.
- Cooperative processes share resources such as variables, memory, or code; their execution can impact one another.
- Independent processes operate without shared resources; for example, transactions on different servers do not interfere with each other.
Importance of Process Synchronization
Risks of Non-Synchronization
- The need for process synchronization arises from the potential issues that can occur when cooperative processes interact improperly.
- If cooperative processes are not synchronized correctly, they may lead to inconsistent states or deadlocks.
Analogy for Understanding
- An analogy compares unsynchronized cooperative processes to a guitar played by someone untrained—without proper coordination, it produces noise instead of music.
Example of Cooperative Processes Issues
Shared Variables Problem
- A practical example involves two concurrent processes (P1 and P2), both accessing a shared variable named "shared."
- The shared variable is initialized with a value (e.g., 5), but simultaneous access by both P1 and P2 can lead to unpredictable results if not managed properly.
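The scenario can be sketched in a few lines of Python. This is an illustrative stand-in, not the lecture's own code: threads play the role of processes P1 and P2, and a one-second sleep models the pre-emption point described in the lecture.

```python
import threading
import time

shared = 5  # shared variable, initialized to 5

def p1():
    global shared
    x = shared      # X = shared  -> x is 5
    x += 1          # X++         -> x is 6
    time.sleep(1)   # pre-emption point: the CPU switches to P2
    shared = x      # shared = X  -> shared becomes 6

def p2():
    global shared
    y = shared      # y = shared  -> y is 5 (P1 has not written back yet)
    y -= 1          # y--         -> y is 4
    time.sleep(1)   # pre-emption point
    shared = y      # shared = y  -> shared becomes 4

t1 = threading.Thread(target=p1)
t2 = threading.Thread(target=p2)
t1.start()
t2.start()
t1.join()
t2.join()
print(shared)  # 4 or 6 depending on whose write lands last -- never 5
```

Because both threads read the shared variable before either writes it back, one of the two updates is always lost.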
Understanding Process Synchronization and Race Conditions
Process Execution and Pre-emption
- In a uni-processor system, processes P1 and P2 run concurrently, but only one gets CPU access at a time. If P1 gets the CPU first, it executes its first instruction, X = shared, which assigns the value of shared to X.
- The initial value assigned to X is therefore 5. After executing X++, the value of X increments to 6.
- The third instruction is a Sleep command, which pauses the running process. This creates a pre-emption point where the CPU can switch to another process.
Understanding Sleep and Context Switching
- The Sleep(1) command indicates that process P1 will pause for 1 second. This is how pre-emption is triggered in the example code.
- While P1 is paused, the CPU does not remain idle; it looks for other processes that are ready to run.
- When P2 is dispatched from the ready queue via a context switch, it starts executing its instructions with y = shared, so y also receives the value 5.
Continuation of Processes
- As P2 executes its second instruction (y--), y's value decreases to 4. It then encounters another Sleep(1) command.
- Similar to before, while P2 sleeps, the CPU continues searching for other processes that could be executed.
- Once P1's one-second sleep ends, control returns to it, and execution resumes from its fourth instruction.
Finalizing Process Execution
- Upon resuming, P1's fourth instruction, shared = X, updates the shared variable with its local value (X = 6). Thus shared equals 6 after P1 has executed all of its statements.
- After P1 terminates and its resources are reclaimed, the CPU would become idle if no other process were pending.
Race Condition Analysis
- Since process P2 was still in its one-second sleep during this time, it resumes from where it left off (its fourth instruction) once the sleep completes.
- P2's fourth instruction, shared = y, then overwrites the shared variable with 4 instead of the expected 5, because both processes modified it concurrently.
Implications of Non-Synchronization
- The results reveal an inconsistency: one increment followed by one decrement should restore the original value (5), yet the outcome is either 4 or 6 depending on execution order. This is a race condition between two cooperative processes that lack synchronization.
- Both outcomes (4 and 6) stem from unsynchronized access to a shared resource, which is why synchronization is crucial whenever multiple processes operate concurrently on shared data.
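The "4 or 6, never 5" claim can be checked without real timing by replaying each interleaving as an explicit sequence of steps. The simulate helper below is an illustrative construction, not part of the lecture; it applies the six instructions of P1 and P2 in whatever order a schedule dictates.

```python
def simulate(schedule):
    """Replay a schedule of (process, step) pairs, starting from shared = 5."""
    state = {"shared": 5, "x": 0, "y": 0}
    steps = {
        ("P1", "read"):  lambda s: s.__setitem__("x", s["shared"]),   # X = shared
        ("P1", "inc"):   lambda s: s.__setitem__("x", s["x"] + 1),    # X++
        ("P1", "write"): lambda s: s.__setitem__("shared", s["x"]),   # shared = X
        ("P2", "read"):  lambda s: s.__setitem__("y", s["shared"]),   # y = shared
        ("P2", "dec"):   lambda s: s.__setitem__("y", s["y"] - 1),    # y--
        ("P2", "write"): lambda s: s.__setitem__("shared", s["y"]),   # shared = y
    }
    for step in schedule:
        steps[step](state)
    return state["shared"]

# Serial execution (P1 completely before P2) preserves the value:
serial = [("P1", "read"), ("P1", "inc"), ("P1", "write"),
          ("P2", "read"), ("P2", "dec"), ("P2", "write")]

# The lecture's interleaving: both processes read 5 before either writes back.
interleaved = [("P1", "read"), ("P1", "inc"), ("P2", "read"), ("P2", "dec"),
               ("P1", "write"), ("P2", "write")]

print(simulate(serial))       # 5
print(simulate(interleaved))  # 4 (P2's write lands last and overwrites P1's)
```

Swapping the two final writes in the interleaved schedule yields 6 instead of 4; only schedules where one process finishes its read-modify-write before the other begins produce the correct value of 5.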
Need for Synchronization Mechanisms
- To prevent race conditions like this one, synchronization mechanisms such as semaphores or locks must be employed so that each cooperative process completes its read-modify-write on the shared variable as an uninterrupted critical section.
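As a minimal sketch of such a fix, the thread-based example can be repaired with Python's threading.Lock (a mutex; a counting semaphore initialized to 1 would serve the same role). Holding the lock across the whole read-modify-write makes it a critical section, so the sleep no longer lets the other process see a stale value.

```python
import threading
import time

shared = 5
lock = threading.Lock()  # guards the read-modify-write on shared

def p1():
    global shared
    with lock:           # critical section: P2 must wait until P1 releases
        x = shared
        x += 1
        time.sleep(0.1)  # pre-emption can still occur, but the lock is held
        shared = x

def p2():
    global shared
    with lock:
        y = shared
        y -= 1
        time.sleep(0.1)
        shared = y

t1 = threading.Thread(target=p1)
t2 = threading.Thread(target=p2)
t1.start()
t2.start()
t1.join()
t2.join()
print(shared)  # always 5: the increment and decrement see consistent values
```

Whichever thread acquires the lock second now reads the value the first one wrote back (6 or 4), so the final result is 5 regardless of scheduling order.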