OS Tutorial (Gate Wallah)
# Operating System One Shot | CS & IT Engineering | Maha Revision | Target GATE 2025
Operating System

Operating System -> Software abstracting hardware
- It is an interface between user and hardware
Services of Operating Systems
- User Interface
- Program Execution ⭐
- I/O Operation
- File-System Manipulation
- Inter Process Communication
- Error Detection
- Resource Allocation
- Accounting
- Protection & Security
How the OS Works Overall:
1. Resource Management
- Managing CPU, main memory, and storage allocation for multiple tasks and processes, avoiding conflicts and ensuring fairness.
- Includes: Memory Allocation, Virtual Memory, Paging, Segmentation, TLB, Cache Management, Swapping, Memory Protection, Contiguous & Non-contiguous Allocation, Thrashing Control.
2. File Management
- Maintaining file system structure and performing file operations (create, read, update, delete) with access control and security.
- Includes: Directory Structure, File Allocation Methods (Contiguous, Linked, Indexed), Inodes, Journaling, Access Permissions, Free-space Management, Mounting & Unmounting, Backup & Recovery.
3. Process Management
- Creating, scheduling, executing, and terminating processes, including context switching and inter-process communication, to ensure efficient multitasking.
- Includes: PCB, Scheduling Algorithms (FCFS, SJF, RR, Priority, Multilevel Queue, Multilevel Feedback Queue), Context Switching, Threads, IPC (Pipes, Shared Memory, Message Passing, Semaphores), Deadlock (Detection, Prevention, Avoidance, Banker’s Algorithm), Starvation, Critical Section, Synchronization.
4. Device Management
- Managing communication between hardware devices and software by handling I/O operations through device drivers and buffering.
- Includes: Device Drivers, Interrupt Handling, DMA (Direct Memory Access), Buffering, Spooling, I/O Scheduling, Polling vs Interrupts, Disk Scheduling Algorithms (FCFS, SSTF, SCAN, C-SCAN, LOOK, C-LOOK).
Types of OS:
- Uni-Programming OS
  - Degree of Multiprogramming = 1
  - Main Memory -> Only one process at a time [OS|P1]
- Multi-Programming OS
  - Degree of Multiprogramming > 1 ⭐
  - Main Memory -> More than one process at a time [OS | P1 P2 ...]
  - CPU scheduling and better resource utilization (e.g., while P1 is doing an I/O operation, execute P2)
- Multi-Tasking OS
  - A type of Multi-Programming in which the scheduling policy is fixed, i.e., Round Robin ⭐

Note: A single processor/CPU can run only one process at a time.
Type of Process:
- Preemptive -> The OS can take the CPU away from it
- Non-Preemptive -> Can’t be stopped unless the process itself yields

Degree of Multiprogramming -> No. of processes in Main Memory (other than OS processes)
- Degree ∝ Efficiency (up to the point of thrashing)
- Thrashing occurs (mainly in virtual memory systems) when the system spends more time swapping pages in and out of memory than executing actual processes.
[Figure: CPU Utilization vs Degree of Multi-Programming — utilization rises with the degree, then collapses in the thrashing region]

Parts of OS:
( Shell ( Kernel ) )
- Shell -> Outer layer of the OS; provides the interface to the user via GUI or CLI
- Kernel -> Core of the OS; directly interacts with hardware and manages CPU, memory, processes, I/O, and devices
System Call -> A way for a program to interact with the operating system.
- It protects privileged resources (like files) from programs; a program must request them from the OS via a system call.
- Dual Mode of Operation: transition from user mode to kernel mode

User Process: executing --> calls system call --> returns from system call
                  ⬊                                   ⬈
Kernel:        trap (mode bit = 0) --> execute system call --> return (mode bit = 1)

- User mode (mode bit = 1), Kernel mode (mode bit = 0)
Processes

Processes -> Program under execution
Representation of Processes ⭐
Main Memory: [OS | P1 | ...]    Layout of program P1:
┌───────┐
| Stack | -> Local variables, function parameters, return addresses
├───↓───┤
├───↑───┤
| Heap  | -> Dynamic memory allocation
├───────┤
| Data  | -> Static and global variables
├───────┤
| Code  | -> Program instructions
└───────┘

Ques: Does calling malloc() execute a system call? ⭐
- -> Not necessarily; malloc uses some area of the heap. If the heap region has already been given to the program by the OS, there is no need for a system call to use it.
Program Counter (PC) -> Stores the address of the next instruction (an address in the Code section)
┌────────┐
| CPU    |
| ┌────┐ |
| | PC | |
| └────┘ |
└────────┘

Ques: If a process P is running, an address of which section of process P will be stored in the PC?
- -> The PC stores the address of the next instruction, i.e., some address in the Code section of process P inside Main Memory

Ques: In which section of process P would the program counter value be saved?
- -> If a function call or interrupt happens, the PC value is stored inside the Stack of process P
PCB -> Process Control Block
- When a process is dispatched to the CPU, the process contents (instructions, variables) do not go into the CPU; only the context from its PCB is loaded. The PCB stores Process ID, current state, priority, etc.

Main Memory: the OS area holds a PCB for every process:
[ PCB P1 | PCB P2 | PCB P3 | ... ]

Attributes of PCB: [ PID, PC, GPRs, List of Devices, Type, Size, Memory Limits, Priority, State, List of Files ]
Note: The PCB is maintained in the OS for all states (new, ready, running, blocked/wait, suspended, terminated until cleanup). It is not limited to only ready and wait.
Context Switching:
- The contents of the PCB of a process are collectively known as the ‘Context’ of that process
- Done by an OS program known as the Dispatcher
  - Step 1: The current running process’s context is copied from the CPU back to its corresponding PCB
  - Step 2: The next process’s context is loaded from its PCB into the CPU
Context Switch:
CPU [PC][AR][..] <-> OS [ PCB P1 | PCB P2 | ... ]

Ques: Is a context switch the same as preemption?
- -> No. Preemption is not necessary for a context switch: preemption forcefully unloads a running process, while a context switch can also happen voluntarily, e.g., when the running process initiates I/O.
Process States : ⭐⭐
                     preemption
              ┌---------------------┐
   admitted   ↓       Execute       |               exit
{New} ----> {Ready} ----------> {Running} ----------------> {Terminated}
               ↑                    |
               | I/O or event done  | waiting for I/O or event
               └--{Waiting/Blocked}←┘

1. Process Lifecycle (Basic Flow):
{New} ----> {Ready} ----> {Running} ----> {Terminated}
   admitted     scheduled        exit

2. I/O & Preemption Flow:
            preemption
          ┌--------------┐
          ↓   Dispatch   |
{Ready} ----------> {Running}
    ↑                   | event wait
    └----{Waiting}←-----┘
      event completion

Scheduler & Dispatcher:
- Scheduler: Chooses which process in Ready will get CPU next. Uses policies like FCFS, SJF, RR, Priority.
- Dispatcher: Performs context switch to transfer CPU to selected process. Handles:
- Ready → Running (process starts execution)
- Running → Waiting/Blocked (process waits for I/O/event)
- Running → Ready (preemption occurs, e.g., time slice ends or higher-priority process arrives)
- Responsibilities:
- Executes CPU allocation decisions of the scheduler
- Handles preemption
- Performs context switching
Main Memory vs CPU stored state: ⭐
- Running -> Stored in both CPU & Main Memory
- Ready, Blocked -> Stored in Main Memory only
- New & Terminated -> Not stored
Two transitions are voluntary by the process: ⭐
- Running to Terminated
- Running to Blocked
Process Types Based on Resource Usage ⭐
- CPU Bound -> The process is intensive in terms of CPU operations
- IO Bound -> The process is intensive in terms of IO operations
Note:
- A good blend of CPU-bound & IO-bound processes gives better utilization
- A running process can take only two decisions by itself: exit and waiting for I/O or event; all other transitions are decided by the OS
Scheduling Queues -> Queues of PCBs of processes in different states ⭐
- Job Queue -> New state
- Ready Queue -> Ready state
- Device Queue -> Blocked state
Types of Schedulers (Effect on Degree of Multiprogramming) ⭐
1. Long-Term Scheduler (Job) -> Decides which process moves from New to Ready state
- Can increase the Degree of Multiprogramming
2. Short-Term Scheduler (CPU) -> Decides which process moves from Ready to Running state -> CPU Scheduling Algorithms ⭐
- No effect on the Degree of Multiprogramming
3. Mid-Term Scheduler (Medium-Term) -> Swaps a process (Ready or Blocked) out of Main Memory into a suspended state (Suspended Ready or Suspended Blocked) -> Swapping
- Increases/decreases (swap in/out) the Degree of Multiprogramming
- Explanation: It temporarily removes a process from main memory to control the load. When the system is overloaded, it swaps out a Ready/Blocked process to secondary storage (decreasing the Degree of Multiprogramming) and puts it in Suspended Ready/Suspended Blocked. When memory pressure reduces, it swaps the suspended process back into Main Memory’s Ready/Blocked state (increasing the Degree of Multiprogramming).
- It stabilizes memory usage and keeps the system responsive by balancing how many processes actually reside in main memory at a time.
{Ready}   --suspend--> {Suspended Ready}     {Suspended Ready}   --resume--> {Ready}
{Blocked} --suspend--> {Suspended Blocked}   {Suspended Blocked} --resume--> {Blocked}

Effect of Scheduler on Degree of Multiprogramming:
- Long Term -> Can Increase
- Short Term -> No effect X
- Mid-Term -> Increase/Decrease (Swap in/out)
Note: Swapping is also called Roll Out / Roll In when it is done based on priority.
CPU Scheduling
CPU Scheduling:
- Function -> Select which ready process gets the CPU next
- Goal -> Minimize waiting and turnaround time, maximize CPU utilization
Scheduling Times
- Arrival Time (AT)
- Burst Time (BT)
- Completion Time (CT)
- Turnaround Time (TAT) = CT - AT ⭐
- Waiting Time (WT) = TAT - BT ⭐
- Burst time is the total CPU time required by the process to complete its execution, not the time when the process starts
CPU Scheduling:
- Preemptive
- Non-Preemptive
Scheduling Algorithms:
1. FCFS (First Come First Serve) -> Non Preemptive
- Criteria -> Smaller AT first (Tie breaker -> Smaller process_id)
- Problem -> Convoy Effect: if a slow process is scheduled first, all the fast processes behind it suffer.
2. SJF (Shortest Job First) -> Non Preemptive
- Criteria -> Smallest BT first (Tie breaker -> FCFS)
3. SRTF (Shortest Remaining Time First) -> Preemptive ⭐
- Criteria -> Smallest BT first (Tie breaker -> FCFS)
- In SRTF, whenever a new process arrives, we compare it with the currently running process; if the new one has less remaining burst time, it runs first.
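A unit-time simulation makes the preemption rule concrete (a sketch with illustrative names; real schedulers work on timer interrupts, not fixed ticks):

```python
def srtf(processes):
    """SRTF (preemptive SJF) simulated in 1-unit ticks.
    processes: list of (pid, arrival, burst). Returns {pid: completion_time}."""
    arrival = {pid: at for pid, at, _ in processes}
    remaining = {pid: bt for pid, _, bt in processes}
    time, done, ct = 0, 0, {}
    while done < len(processes):
        ready = [p for p in remaining if remaining[p] > 0 and arrival[p] <= time]
        if not ready:
            time += 1          # CPU idle until the next arrival
            continue
        # Smallest remaining burst first; FCFS (arrival, then pid) breaks ties.
        p = min(ready, key=lambda q: (remaining[q], arrival[q], q))
        remaining[p] -= 1      # run the chosen process for one unit
        time += 1
        if remaining[p] == 0:
            ct[p] = time
            done += 1
    return ct

# P1(0,8), P2(1,4), P3(2,9), P4(3,5): P2 preempts P1 at t=1
print(srtf([(1, 0, 8), (2, 1, 4), (3, 2, 9), (4, 3, 5)]))
```

In the sample run, P1 loses the CPU as soon as P2 arrives with a shorter remaining burst, then P4 runs before P1 resumes.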
4. Priority based scheduling -> Preemptive/Non-Preemptive
- Criteria -> Priority static/dynamic (Tie breaker -> Given in Ques)
- Problem -> Starvation: a low-priority process may wait too long (Solution -> aging, only for dynamic priority)
5. Round-Robin -> Preemptive ⭐
- Criteria -> Smaller AT first + time slice (Quantum)
- If two processes arrive at the ready state at the same time, one from the New state and another preempted from the CPU, the new process is placed ahead in the queue.
- Quantum Value (Q):
  - Very very small -> No efficiency (too many context switches)
  - Small -> Interactive
  - Large -> Less interactive
  - Very very large -> RR degrades to FCFS
Note: ⭐
- Learn to make a Gantt Chart for analyzing process scheduling.
- Most questions ask for the Average TAT or Average WT.
6. Multilevel Queue (MLQ) Scheduling:
- Different queues for different types of processes:
  System Processes
    ↓
  Foreground Processes
    ↓
  Background Processes
- Scheduling among queues:
  - Fixed-priority preemptive scheduling:
    - A separate queue per priority level; processes from the higher-priority queue get the CPU first
    - Even if a lower-priority process (say, a foreground process) is on the CPU, when some process enters the ready state of a higher-priority queue (e.g., the System Process queue), the foreground process is preempted and the system process runs first
    - Problem: starvation of low-priority processes
  - Time slicing:
    - Each type of process is given a certain time limit. For example, in every 100ns, 60ns, 30ns and 10ns are given to System, Foreground and Background processes respectively
    - Problem: the type of process with the larger time slice gets more CPU time
7. Multilevel Feedback Queue (MLFQ) Scheduling ⭐
- Different queues for types of processes, with flexible priority
- The priority (queue) of a process can change based on feedback
- Round Robin is used within each priority queue
- It is used in modern-day computers ⭐

System Processes <-> Foreground Processes <-> Background Processes (processes can move between queues)

Advantages and Disadvantages of Each Scheduling Algorithm ⭐
- FCFS:
  - Advantage: Easy to implement, no complex logic, no starvation ✅
  - Disadvantage: No option of preemption; convoy effect makes the system slow ❌
- SJF:
  - Advantage: Minimum average waiting time among non-preemptive algorithms, better throughput in a continuous run ✅
  - Disadvantage: No practical implementation because burst time is not known in advance; no option of preemption; longer processes may suffer from starvation ❌
- SRTF:
  - Advantage: Minimum average waiting time among all algorithms, better throughput in a continuous run ✅
  - Disadvantage: No practical implementation because burst time is not known in advance; longer processes may suffer from starvation ❌
- Priority-Based:
  - Advantage: Better response for real-time situations ✅
  - Disadvantage: Low-priority processes may suffer from starvation ❌
- Round Robin:
  - Advantage: All processes execute one by one, so no starvation; better interactivity; burst time need not be known in advance ✅
  - Disadvantage: Average waiting time and turnaround time are higher; can degrade to FCFS ❌
Synchronisation ⭐
Type of Processes:
- Independent -> Process doesn’t communicate with other processes
- Cooperating/Coordinating/Communicating -> Process communicate with each other
Problems without Synchronization:
- Inconsistency
- Loss of Data
- DeadLock
Race Condition -> An undesirable situation that occurs when the final result of concurrent processes depends on the sequence in which the processes complete their execution. ⭐
Ques: How many different values of X are possible after both P1 and P2 finish execution?
X = 6
P1: X = X/2
P2: X = X+4
Ans -> 4 values are possible: ⭐
- P1 -> P2 -> X = 7
- P2 -> P1 -> X = 5
- Both read X = 6 simultaneously, P1 writes later -> X = 3
- Both read X = 6 simultaneously, P2 writes later -> X = 10
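The four outcomes can be enumerated by treating each process as a read-compute-write step; this sketch (the function name is mine) covers the two serial orders and the two lost-update interleavings:

```python
def race_outcomes(x0, p1, p2):
    """Possible final values of a shared X when two processes each
    read X, compute, and write back."""
    return {
        p2(p1(x0)),  # P1 runs fully, then P2
        p1(p2(x0)),  # P2 runs fully, then P1
        p1(x0),      # both read x0; P2 writes first, P1 overwrites (lost update)
        p2(x0),      # both read x0; P1 writes first, P2 overwrites (lost update)
    }

# X = 6, P1: X = X/2, P2: X = X+4
print(sorted(race_outcomes(6, lambda x: x // 2, lambda x: x + 4)))
```

The same enumeration idea applies to the min/max question that follows: the extreme values come from which writes get lost.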
Ques: Minimum and maximum possible value of X after all the processes finish?
X = 20
P1: X = X+3
P2: X = X+2
P3: X = X-4
P4: X = X-6
Ans -> By race condition:
Xmin = 10 (the writes of P1 and P2 are lost; P3 and P4 apply last: 20-4-6 = 10)
Xmax = 25 (the writes of P3 and P4 are lost; P1 and P2 apply last: 20+3+2 = 25)
Critical Section:
Critical Section -> A code segment where a shared variable is accessed ⭐

┌──────┐
│ ---- │
│┌────┐│
││----││ -> Critical Section
││----││ -> Synchronisation required
│└────┘│
│ ---- │
└──────┘

- Shared Variable -> A way of communication using common memory accessible by multiple processes or threads. ⭐
- Other communication ways -> Message Passing, Pipes, Sockets, Semaphores, Signals.
Critical Section -> Shared variable -> Synchronization -> Solution
Requirements of a Critical Section Solution ⭐
- Mutual Exclusion -> If a process P1 is executing the critical section, another process P2 can’t enter it, i.e., only one process can be in the critical section at a time
- Progress -> If no process is in the critical section, a process that wants to enter must be able to; if entry is blocked for some other reason, there is no progress. If deadlock -> no progress
- Bounded Waiting -> If P1 is in the critical section and P2 is waiting, then after P1 comes out and tries to re-enter, it must be stopped and the waiting process P2 given priority to enter
C.S -> Critical Section R.S -> Remaining Section
A. Software Solutions ⭐
Solution 1 (Lock):

Boolean lock = false;

P0:                      P1:
while(true)              while(true)
{                        {
  while (lock);            while (lock);
  lock = true;             lock = true;
  // C.S //                // C.S //
  lock = false;            lock = false;
  // R.S //                // R.S //
}                        }
- Mutual Exclusion ❌   - Progress ✅   - Bounded Wait ❌ -> Starvation
- Initially lock = false; if both processes execute while(lock) at the same time, both see false and both enter the critical section -> no Mutual Exclusion
- If no process is inside the critical section, the last updated value of lock is false, so any process that wants in can enter -> there is Progress
- If process P1 enters the critical section and sets lock = true, P2 can’t enter. After P1 finishes one cycle it comes out, sets lock = false, and can immediately re-enter and set lock = true again even if P2 has been waiting; there is no mechanism to hold P1 back in favour of a longer-waiting process -> no Bounded Waiting
So:
- Mutual Exclusion not guaranteed: both processes can read lock as false simultaneously and then set lock = true, entering the critical section together.
- Bounded Waiting not satisfied: there is no bound on waiting, so one process may repeatedly enter the critical section while the other starves.
Solution 2 (Turn):

int turn = 0;

P0:                      P1:
while(true)              while(true)
{                        {
  while (turn != 0);       while (turn != 1);
  // C.S //                // C.S //
  turn = 1;                turn = 0;
  // R.S //                // R.S //
}                        }
- Mutual Exclusion ✅   - Progress ❌ -> Starvation   - Bounded Wait ✅
- Each process hands the other its turn after finishing -> strict alternation -> Mutual Exclusion and Bounded Waiting hold
- If only P1 wants to execute, turn = 0 initially, so P1’s while(turn != 1) spins forever -> no Progress
- Starvation is also present: each process needs the other to execute before it can run again.
So:
- Mutual Exclusion guaranteed: only one process can enter the critical section, since entry depends on the value of turn.
- Progress not guaranteed: if one process fails or stays away while it holds the turn, the other process waits indefinitely.
- Bounded Waiting guaranteed: each process gets its turn alternately; no unbounded waiting occurs.
Solution 3 (Flag[] & Turn): Peterson’s Solution ⭐⭐

Boolean Flag[2] = {false, false};
int turn;

P0:                                P1:
while(true)                        while(true)
{                                  {
  Flag[0] = true;                    Flag[1] = true;
  turn = 1;                          turn = 0;
  while (Flag[1] && turn == 1);      while (Flag[0] && turn == 0);
  // C.S //                          // C.S //
  Flag[0] = false;                   Flag[1] = false;
  // R.S //                          // R.S //
}                                  }
- Mutual Exclusion ✅   - Progress ✅   - Bounded Wait ✅   - No Starvation ⭐
- Processes P0 and P1 have their own flags Flag[0] and Flag[1], both false initially. Flag[0] = true -> P0 wants to enter the C.S; Flag[1] = true -> P1 wants to enter the C.S
- turn is used for priority: at any time turn = 0 or turn = 1. Before waiting, each process sets turn to the other’s index, giving the other priority; so after a process completes, a waiting process gets in first -> Bounded Waiting
- In P0: while(Flag[1] && turn == 1) spins while P1 wants the C.S and the priority is P1’s, so P0 waits (and vice versa) -> Mutual Exclusion
- If the other process doesn’t want the C.S, its Flag is false, so the while condition fails and the current process enters -> Progress
So:
- Mutual Exclusion guaranteed: if both want to enter, turn ensures only one proceeds.
- Progress guaranteed: the decision depends only on the flag and turn of the competing processes, so there is no deadlock or indefinite blocking.
- Bounded Waiting guaranteed: each process enters its critical section after at most one entry by the other process.
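Peterson’s algorithm can be exercised with two real threads. This is a sketch: under CPython’s interpreter lock the plain reads/writes behave sequentially consistently, but on real hardware the algorithm additionally needs memory barriers, which is one reason practical systems use the hardware primitives of the next section. All names here are mine.

```python
import threading

flag = [False, False]   # flag[i]: process i wants the critical section
turn = 0                # who must yield when both want in
counter = 0             # shared variable protected by the algorithm

def worker(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True                       # entry: announce intent
        turn = other                         # give the other priority
        while flag[other] and turn == other:
            pass                             # busy-wait
        counter += 1                         # critical section
        flag[i] = False                      # exit section

def run(iterations=2000):
    """Run both workers and return the final counter value."""
    global counter
    counter = 0
    flag[0] = flag[1] = False
    ts = [threading.Thread(target=worker, args=(i, iterations)) for i in (0, 1)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter

print(run())   # with mutual exclusion, no increments are lost
```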
B. Hardware Solutions
- TestAndSet()
- Swap()

Solution 1 (TestAndSet()): returns the current value of the flag and sets it to true, atomically

Boolean Lock = false;

boolean TestAndSet(Boolean *trg) {
    boolean rv = *trg;
    *trg = true;
    return rv;
}

P0:                              P1:
while(true)                      while(true)
{                                {
  while (TestAndSet(&Lock));       while (TestAndSet(&Lock));
  // C.S //                        // C.S //
  Lock = false;                    Lock = false;
}                                }
- Mutual Exclusion ✅   - Progress ✅   - Bounded Wait ❌
- Initially Lock = false. For a process P1, TestAndSet(&Lock) returns false and sets Lock = true, so P1 enters the critical section and keeps other processes out.
- This is the same idea as software solution 1 (lock), except that here no preemption is possible between checking the lock variable and setting it to true: the test and the set happen as one atomic hardware instruction. If Lock = false, it is atomically set to true and the process enters; if Lock = true, while(TestAndSet(&Lock)) keeps spinning.
Solution 2 (Swap()):

Boolean Key;          // local to each process
Boolean Lock = false; // global and shared

void Swap(Boolean *a, Boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

P0:                              P1:
while(true)                      while(true)
{                                {
  Key = true;                      Key = true;
  while (Key == true)              while (Key == true)
    Swap(&Lock, &Key);               Swap(&Lock, &Key);
  // C.S //                        // C.S //
  Lock = false;                    Lock = false;
  // R.S //                        // R.S //
}                                }
- Mutual Exclusion ✅   - Progress ✅   - Bounded Wait ❌
- Each process P0 and P1 has its own local variable Key
- Swap() exchanges the values of Lock and Key atomically
- Let P0 execute. Initially Lock = false and Key = true. while(Key == true) -> Swap Lock & Key -> Lock = true, Key = false; now while(Key == true) fails -> enter the critical section. The other process can’t access the C.S until P0 completes, because Lock = true.
Synchronization Tools : ⭐
- Semaphore
- Monitor
Semaphore -> Shared integer value (non-negative)
- The semaphore value S can be accessed using the following functions only:
  - Wait() / P() / Down()
  - Signal() / V() / Up()

⭐
wait(S)            signal(S)
{                  {
  while (S <= 0);    S++;
  S--;             }
}

Two types of Semaphore:
- Binary: 0, 1
- Counting: 0, 1, 2, 3, ...

Binary:
  if S = 0: wait(S) => not successful (blocks);   signal(S) => S = 1
  if S = 1: wait(S) => S = 0 (⭐ set to 0);        signal(S) => S = 1 (⭐ stays 1)

Counting:
  if S = 0: wait(S) => not successful (blocks);   signal(S) => S++
  if S > 0: wait(S) => S-- (⭐ decrement by 1);    signal(S) => S++ (⭐ increment by 1)

Ques: A binary semaphore S = 1 and 10 processes P1, P2, ..., P10. All processes have the same code as below, but one process P10 has signal(S) in place of wait(S). If all processes can execute multiple times, what is the maximum number of processes that can be in the critical section together?
while(true)
{
  wait(S)
  // C.S //
  signal(S)
}

Ans: All 10 processes can be in the critical section concurrently. Since P10 doesn’t have wait(S), it is never restricted from entering the C.S. Processes enter and stay inside one by one, with P10 going out and coming back in each time to reset S to 1:
(S=1) -> (P1 enters => S=0) -> (P10 enters => S=1) -> (P2 enters => S=0) -> (P10 => S=1) -> (P3 => S=0) ... -> (P10 => S=1, now it stays in the C.S) -> (P9 enters => S=0)
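The argument can be replayed mechanically with a binary-semaphore model. This is a deterministic simulation of the schedule described above, not real concurrency; all names are mine:

```python
def binary_wait(s):
    """wait(S) on a binary semaphore: succeeds only when S == 1."""
    return (0, True) if s == 1 else (s, False)

def binary_signal(s):
    """signal(S) on a binary semaphore: sets S to 1."""
    return (1, True)

def max_in_cs():
    """Replay the schedule: after each Pk enters via wait(S), P10 (whose
    entry operation is signal) enters and resets S to 1; nobody leaves."""
    s, inside = 1, set()
    for k in range(1, 10):        # P1 .. P9 enter via wait(S)
        s, ok = binary_wait(s)
        assert ok                 # S was 1, so the wait succeeds
        inside.add(k)
        s, _ = binary_signal(s)   # P10 "enters" via signal(S): S back to 1
        inside.add(10)
    return len(inside)

print(max_in_cs())
```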
Classical Synchronisation Problems: ⭐
- Bounded Buffer Problem
- Reader-Writer Problem
- Dining Philosopher Problem
1. Bounded Buffer Problem:

Bounded buffer with capacity N:
                      ┌──────────────────────────────┐
Multiple Producers => │ [A] [B] [C] [D] [ ] ... [ ]  │ => Multiple Consumers
                      │  0   1   2   3   4      N-1  │
                      └──────────────────────────────┘

- Also known as the producer-consumer problem
- The buffer is the shared resource between producers and consumers
- Solution:
  - Producers must block if the buffer is full
  - Consumers must block if the buffer is empty
- Variables: Mutex -> binary semaphore (initialized S=1) to take a lock on the buffer (Mutual Exclusion); Full -> counting semaphore (S=0) denoting the number of occupied slots; Empty -> counting semaphore (S=N) denoting the number of empty slots

Producer          Consumer
---------         -----------
wait(empty)       wait(full)
wait(mutex)       wait(mutex)
// buffer //      // buffer //
signal(mutex)     signal(mutex)
signal(full)      signal(empty)
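The pseudocode maps directly onto Python’s `threading.Semaphore`; a sketch (the function names and the deque buffer are my choices):

```python
import threading
from collections import deque

N = 5
buffer = deque()
mutex = threading.Semaphore(1)   # lock on the buffer (mutual exclusion)
empty = threading.Semaphore(N)   # number of empty slots
full = threading.Semaphore(0)    # number of occupied slots

def producer(items):
    for item in items:
        empty.acquire()          # wait(empty): block while the buffer is full
        with mutex:              # wait(mutex) ... signal(mutex)
            buffer.append(item)
        full.release()           # signal(full)

def consumer(n, out):
    for _ in range(n):
        full.acquire()           # wait(full): block while the buffer is empty
        with mutex:
            out.append(buffer.popleft())
        empty.release()          # signal(empty)

out = []
p = threading.Thread(target=producer, args=(list(range(100)),))
c = threading.Thread(target=consumer, args=(100, out))
p.start(); c.start(); p.join(); c.join()
print(out[:5], len(out))
```

With one producer and one consumer the FIFO buffer preserves order, even though neither thread ever holds more than N items' worth of lead.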
Note: In the producer, if wait(empty) & wait(mutex) are swapped there can be deadlock; similarly in the consumer, if wait(full) & wait(mutex) are swapped there may be deadlock.

2. Reader-Writer Problem
- If writer is accessing the file, then all other readers and writers will be blocked
- If any reader is reading, then other readers can read but writer will be blocked
- Variables: Mutex -> binary semaphore (initialized S=1) to provide mutual exclusion on ReadCount; Wrt -> binary semaphore (S=1) to restrict readers and writers while writing is going on; ReadCount -> integer variable (initially 0) denoting the number of active readers

Writer           Reader
---------        -----------
wait(wrt)        wait(mutex)
// write //      ReadCount++
signal(wrt)      if (ReadCount == 1):
                     wait(wrt)      // first reader blocks writers (or waits for one)
                 signal(mutex)
                 // reading //
                 wait(mutex)
                 ReadCount--
                 if (ReadCount == 0):
                     signal(wrt)    // last reader allows writers
                 signal(mutex)
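The same structure in runnable form, using `threading.Semaphore` (a sketch; variable names and the `data` dict are mine):

```python
import threading

mutex = threading.Semaphore(1)   # protects read_count
wrt = threading.Semaphore(1)     # held by a writer, or by the reader group
read_count = 0
data = {"value": 0}              # the shared file/variable

def writer(times):
    for _ in range(times):
        wrt.acquire()            # wait(wrt)
        data["value"] += 1       # writing
        wrt.release()            # signal(wrt)

def reader(times, seen):
    global read_count
    for _ in range(times):
        with mutex:
            read_count += 1
            if read_count == 1:
                wrt.acquire()    # first reader locks out writers
        seen.append(data["value"])   # reading; may overlap with other readers
        with mutex:
            read_count -= 1
            if read_count == 0:
                wrt.release()    # last reader lets writers back in

seen = []
w = threading.Thread(target=writer, args=(1000,))
r1 = threading.Thread(target=reader, args=(500, seen))
r2 = threading.Thread(target=reader, args=(500, seen))
w.start(); r1.start(); r2.start(); w.join(); r1.join(); r2.join()
print(data["value"], len(seen))
```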
Note: ReadCount is not taken as a semaphore, because we need comparison operators on it, which are not allowed on a semaphore; a semaphore can only be accessed via wait() and signal().

3. Dining Philosopher Problem
- k philosophers (processes) around a table, e.g., k = 3
- k chopsticks (resources), one between each pair of neighbours
- Each philosopher needs two chopsticks to eat
- Variable: array of binary semaphores ch[k] = {1, 1, ..., 1}

Pi:
wait(ch[i])
wait(ch[(i+1) mod k])
// eating //
signal(ch[i])
signal(ch[(i+1) mod k])

- Problem: Deadlock -> if every philosopher picks up one chopstick, ch = {0, 0, ..., 0} and everyone waits forever
- Solutions:
  - Allow at most (k-1) philosophers at the table
  - A philosopher may pick up chopsticks only if both are available at the same time
  - One philosopher picks the left chopstick first and then the right, while all others pick the right one first and then the left
Deadlock
Starvation -> Indefinite wait
Deadlock -> Infinite wait
Three Operation on Resources:
- Request
- Use
- Release
Deadlock -> Two or more processes are waiting for an event that is never going to occur. Example:

     | Hold     | Wait
-----|----------|---------
P1   | Keyboard | Printer
P2   | Printer  | Keyboard

Four Necessary Conditions for Deadlock:
- Deadlock can occur only when all the following conditions are satisfied:
- Mutual Exclusion
- Hold & Wait
- No-Preemption (of Resources)
- Circular Wait
Resource Allocation Graph (RAG):
- Vertex -> Process (P) and Resource [ ] R
- Edge -> Allocation: P <- [ ], Request: P -> [ ]

[Figure: example RAG with resources R1 [● ●], R2 [● ● ●], R3 [● ●] and allocation/request edges among processes P1, P2, P3]

Deadlock Handling:
- Make sure that deadlock never occurs -> prevent or avoid deadlock
- Allow deadlock, then detect and recover
- Pretend that there is no deadlock (used by modern-day OSes)
Deadlock Handling Techniques:
1. Deadlock Prevention: Ensure at least one of the necessary conditions is never met.
2. Deadlock Avoidance: Dynamically check resource allocation to avoid unsafe states (e.g., Banker’s Algorithm).
3. Deadlock Detection & Recovery: Allow deadlock to occur, detect it, and recover by terminating or preempting processes.
4. Deadlock Ignorance: Do nothing (used in systems like UNIX where deadlocks are rare).
A. Deadlock Prevention -> Break any of the 4 necessary conditions:
- Mutual Exclusion
  - Make each process independent so that no sharing of resources is required? -> ❌ Not possible
  - Or build the system with so many resources that each process has its own? -> ❌ Too costly
- Hold and Wait
  - Either hold all required resources or wait; don’t hold partially acquired resources? -> ❌ A process may need to wait a long time; bad resource utilization
- No Preemption
  - Preempt a process causing deadlock? -> ❌ If resources are taken away mid-execution, the process may be left in an unstable state
- Circular Wait
  - Require processes to request resources in a specific (say, increasing) order. If process P1 requires R1, R2 and R3, allow it to request R3 only if it already holds R2, and R2 only if it already holds R1 -> while holding a lower-numbered resource, only higher-numbered resources may be requested ✅
B. Deadlock Avoidance -> The OS tries to keep the system in a safe state

[ Unsafe State ⊃ Deadlock ]   [ Safe State ]

- Banker’s Algorithm

| Processes | Alloc. | Max   | Available |
|-----------|--------|-------|-----------|
|           | A B C  | A B C | A B C     |
| ...       | ...    | ...   | ...       |

Need = Max - Alloc
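A sketch of the Banker’s safety check, which decides whether a state (Alloc, Max, Available) is safe, i.e., whether some ordering lets every process obtain its remaining Need and finish (function and variable names are mine):

```python
def is_safe(alloc, max_need, available):
    """Banker's safety algorithm: True iff some ordering lets every
    process acquire its remaining Need = Max - Alloc and finish."""
    n = len(alloc)
    need = [[m - a for m, a in zip(max_need[i], alloc[i])] for i in range(n)]
    work = list(available)                # resources currently free
    finished = [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and all(nj <= wj for nj, wj in zip(need[i], work)):
                # Pi can run to completion; it then releases its allocation.
                work = [wj + aj for wj, aj in zip(work, alloc[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)          # safe iff everyone could finish

# A classic safe state with resources A, B, C:
alloc = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maxn  = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
print(is_safe(alloc, maxn, [3,3,2]))
```

A request is granted only if pretending to allocate it still leaves the system in a state for which this check returns True.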
Deadlock Avoidance Algorithm (Pi requests requestᵢ):
1. Check requestᵢ ≤ needᵢ
2. Check requestᵢ ≤ available
3. Pretend to allocate:
   allocationᵢ = allocationᵢ + requestᵢ
   needᵢ = needᵢ - requestᵢ
   available = available - requestᵢ
4. Run the safety algorithm: if safe -> requestᵢ granted; if unsafe -> requestᵢ rejected (roll back)

C. Deadlock Detection
- When all resources have single instance -> Wait for Graph
- When resources have multiple instances -> Deadlock Detection Algorithm
- Wait For Graph (WFG)
- It is created from resource allocation graph
[Figure: a RAG with single-instance resources R1–R4 among processes P1–P5, and the Wait-For Graph (WFG) derived from it by collapsing the resource nodes; the WFG contains a cycle among P2, P3 and P4]
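With single-instance resources, deadlock detection reduces to cycle detection on the WFG; a DFS sketch (edge data is illustrative, names are mine):

```python
def has_cycle(wfg):
    """wfg: dict mapping each process to the processes it waits for.
    A cycle in the WFG means deadlock (single-instance resources)."""
    WHITE, GREY, BLACK = 0, 1, 2     # unvisited / on current DFS path / done
    color = {}

    def dfs(p):
        color[p] = GREY
        for q in wfg.get(p, ()):
            c = color.get(q, WHITE)
            if c == GREY:            # back edge: q is on the current path
                return True
            if c == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color.get(p, WHITE) == WHITE and dfs(p) for p in list(wfg))

# P2 -> P3 -> P4 -> P2 forms a cycle, so this state is deadlocked:
print(has_cycle({1: [2], 2: [3], 3: [4], 4: [2], 5: [3]}))
```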
If there is a cycle -> Deadlock

- Deadlock Detection Algorithm:
- When resources have multiple instance, Deadlock detection is done using a specific algorithm
- Similar to Banker’s Algorithm: in Banker’s Algorithm we are given Max and Alloc, but here Request and Alloc are given, so there is no need to calculate Need (the Request matrix plays that role)
| Processes | Alloc. | Request | Available |
|-----------|--------|---------|-----------|
|           | A B C  | A B C   | A B C     |
| ...       | ...    | ...     | ...       |

Resource Requirement Formula ⭐
Formula 1 (different resource requirement per process):
- Processes require resources R1, R2, R3, ..., Rn
- Max resources such that deadlock can occur = (R1−1) + (R2−1) + ... + (Rn−1)
- Min resources such that deadlock never occurs = (R1−1) + (R2−1) + ... + (Rn−1) + 1

Formula 2 (equal resource requirement per process):
- Each of the n processes requires R resources
- Max resources such that deadlock can occur = n(R−1)
- Min resources such that deadlock never occurs = n(R−1) + 1

Ques: Three processes P1, P2 and P3 require 5, 6 and 7 resources respectively to execute. What is the minimum number of resources the system should have such that deadlock can never occur?
Ans:
Max no. of resources such that deadlock can occur:
P1 = 5-1 = 4, P2 = 6-1 = 5, P3 = 7-1 = 6
=> 4 + 5 + 6 = 15
Min no. of resources such that deadlock never occurs:
=> 15 + 1 = 16

Ques: Consider a system with n processes, each requiring R resources to execute. What is the minimum number of resources such that deadlock can never occur? ⭐
Max no. of resources such that deadlock can occur:
=> (R-1) + (R-1) + ... + (R-1), n times = n(R-1)
Min no. of resources such that deadlock never occurs:
=> n(R-1) + 1

D. Recovery From Deadlock
There are three basic approaches to recover from deadlock:
1. Inform the system operator and allow manual intervention
2. Terminate one or more processes involved in the deadlock
3. Preempt resources
- Process Termination
- Terminate all processes involved in the deadlock
- Terminate processes one by one until the deadlock is broken
Memory Management
- Memory Management -> A module of the OS
- Functions:
- Memory allocation
- Memory deallocation
- Memory protection
- Goals:
- Maximum Utilization of space
- Ability to run larger programs with limited space

Memory Management Techniques:
- Contiguous: fixed partitioning, variable partitioning
- Non-contiguous: Paging, Segmentation

Contiguous Memory Allocation
Contiguous Memory Allocation → The whole process is stored in a single continuous block of memory.
- Fixed Partitioning -> Memory is divided into a fixed number of partitions at system startup. When a process arrives, it is allocated a partition based on the partition allocation method (e.g., first-fit, best-fit).
  - Problem: Internal Fragmentation -> wasted space inside a partition when the process is smaller than the partition size.
- Variable Partitioning -> No fixed partitions; memory is allocated dynamically as per the process’s need. When a process arrives, a partition of the exact size is created.
  - Problem: External Fragmentation -> free memory exists, but it is scattered in non-contiguous blocks, making it unusable for larger processes.
  - Solution: Compaction -> rearranges memory contents to place all free memory together in one large block.

Non-Contiguous Memory Allocation
Non-Contiguous Memory Allocation → The process is divided into multiple parts and stored in different locations in memory. This allows better memory utilization but requires mechanisms like paging or segmentation for access.
- Paging -> The process is divided into fixed-size pages, and memory is divided into frames of the same size.
- A page table maps each page to a frame in physical memory.
- Eliminates external fragmentation but may cause internal fragmentation if the last page is not fully used.
- Segmentation -> The process is divided into logical segments (e.g., code, stack, heap), each of varying size.
- A segment table keeps track of each segment’s base address and limit. ⭐
- Eliminates internal fragmentation but may cause external fragmentation (though less than variable partitioning).
Paging: ⭐
Formulas and Numerical important for GATE
- A process is divided into equal-size partitions called pages
- Physical memory is divided into same-size partitions called frames
- The processor has a view of the process and its pages
- Pages are scattered among frames
- A page table is used to map a process page to a physical frame
Example of Paging:
   Process        Page table        Physical Memory (Main Memory)
 ┌────────┐        ┌─────┐          ┌─────────┐
 │ Page 0 │ --> 0  │  3  │          │         │ frame 0
 ├────────┤        ├─────┤          ├─────────┤
 │ Page 1 │ --> 1  │  6  │          │ page 3  │ frame 1
 ├────────┤        ├─────┤          ├─────────┤
 │ Page 2 │ --> 2  │  5  │          │         │ frame 2
 ├────────┤        ├─────┤          ├─────────┤
 │ Page 3 │ --> 3  │  1  │          │ page 0  │ frame 3
 └────────┘        └─────┘          ├─────────┤
                                    │         │ frame 4
                                    ├─────────┤
                                    │ page 2  │ frame 5
                                    ├─────────┤
                                    │ page 1  │ frame 6
                                    ├─────────┤
                                    │         │ frame 7
                                    └─────────┘

Page Table Formulas: ⭐
No. of entries in P.T. ---> No. of pages
1 P.T. entry size ---> (frame no. bits) + extra bits (if any)
P.T. size ---> (no. of entries in P.T.) * (1 P.T. entry size)
          ---> (No. of pages) * (frame no. bits) ⭐
          ---> (No. of pages) * log2(No. of frames)

Page-Frame Formula
bits to represent page no. ---> log2(No. of pages)
bits to represent frame no. ---> log2(No. of frames)
1 frame size <---> 1 page size

Binary Representations of Pages and Frames:
Let No. of pages = 4, No. of frames = 8, Page size = 2 byte
   Process         Page table        Physical Memory
 ┌─────────┐        ┌─────┐          ┌─────────┐
 │ Page 00 │   0    │  3  │          │         │ frame 000
 ├─────────┤        ├─────┤          ├─────────┤
 │ Page 01 │   1    │  6  │          │ page 3  │ frame 001
 ├─────────┤        ├─────┤          ├─────────┤
 │ Page 10 │   2    │  5  │          │         │ frame 010
 ├─────────┤        ├─────┤          ├─────────┤
 │ Page 11 │   3    │  1  │          │ page 0  │ frame 011
 └─────────┘        └─────┘          ├─────────┤
                                     │         │ frame 100
                                     ├─────────┤
                                     │ page 2  │ frame 101
                                     ├─────────┤
                                     │ page 1  │ frame 110
                                     ├─────────┤
                                     │         │ frame 111
                                     └─────────┘

then,
- bits to represent page no. = log(4) = 2
- bits to represent frame no. = log(8) = 3
- total process size = (no. of pages) x (page size) = 4 x 2 byte = 8 byte
- total Main Memory size = (no. of frames) x (frame size) = 8 x 2 byte = 16 byte

Logical Address and Physical Address For: ⭐
1 page (or frame) = 2 byte
1 address = 1 byte (Byte Addressable Memory) ⭐

Let, Physical Memory Size = 16 byte
-> No. of Addresses in Physical Memory = 16 addresses
-> No. of bits in Physical Address = log(16) = 4 bits

Let, Process = 8 byte
-> No. of Addresses in Logical Memory = 8 addresses
-> No. of bits in Logical Address = log(8) = 3 bits

Page & Frame (these are of the same size) ⭐
-> No. of pages = 8 byte / 2 byte = 4 pages
-> No. of page bits = log2(4) = 2 bits
-> No. of frames = 16 byte / 2 byte = 8 frames
-> No. of frame bits = log2(8) = 3 bits
 Logical     Process        Page table   Physical     Physical Memory
 Address                                 Address
        ┌─────────┐          ┌─────┐      0000  ┌─────────┐
   000  │ Page 00 │     0    │  3  │      0001  │         │ frame 000
   001  │         │          │     │            ├─────────┤
        ├─────────┤          ├─────┤      0010  │ page 3  │ frame 001
   010  │ Page 01 │     1    │  6  │      0011  │         │
   011  │         │          │     │            ├─────────┤
        ├─────────┤          ├─────┤      0100  │         │ frame 010
   100  │ Page 10 │     2    │  5  │      0101  │         │
   101  │         │          │     │            ├─────────┤
        ├─────────┤          ├─────┤      0110  │ page 0  │ frame 011
   110  │ Page 11 │     3    │  1  │      0111  │         │
   111  │         │          │     │            ├─────────┤
        └─────────┘          └─────┘      1000  │         │ frame 100
                                          1001  │         │
                                                ├─────────┤
                                          1010  │ page 2  │ frame 101
                                          1011  │         │
                                                ├─────────┤
                                          1100  │ page 1  │ frame 110
                                          1101  │         │
                                                ├─────────┤
                                          1110  │         │ frame 111
                                          1111  │         │
                                                └─────────┘

Logical Address to Physical Address
 Logical Address
 ┌─────┬─────┐
 │  p  │  d  │   p: page no., d: offset
 └─────┴─────┘
          |
          | equal
          ↓
 Physical Address
 ┌─────┬─────┐
 │  f  │  d  │   f: frame no., d: offset
 └─────┴─────┘
 P.T. 1 entry
 ┌─────┬─────────────┐
 │  f  │ extra bits  │
 └─────┴─────────────┘

Note: Main Memory is Byte Addressable, i.e. each address can store 1 byte
Formula: ⭐⭐⭐
- No. of bits in offset = log(page size) = d
- No. of bits in page no. = log(no. of pages) = p
- No. of bits in frame no. = log(no. of frames) = f
- No. of bits in Logical Address = log(process size) = p + d
- No. of bits in Physical Address = log(memory size) = f + d

Reverse Formula:
- Page size = 2^(no. of bits in offset) = 2^d
- No. of pages = 2^(no. of bits in page no.) = 2^p
- No. of frames = 2^(no. of bits in frame no.) = 2^f
- Process Size = page size x no. of pages = 2^(no. of bits in Logical Address) = 2^(p+d)
- Physical Memory size = frame size x no. of frames = 2^(no. of bits in Physical Address) = 2^(f+d)
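These formulas can be sketched as a tiny helper (illustrative Python; `paging_bits` is a made-up name, and byte-addressable memory with power-of-two sizes is assumed):

```python
import math

def paging_bits(page_size, num_pages, num_frames):
    """Derive address-split widths from page size, page count and frame count."""
    d = int(math.log2(page_size))    # offset bits
    p = int(math.log2(num_pages))    # page-number bits
    f = int(math.log2(num_frames))   # frame-number bits
    return {
        "offset_bits": d,
        "logical_address_bits": p + d,    # log2(process size)
        "physical_address_bits": f + d,   # log2(memory size)
        "process_size": num_pages * page_size,
        "memory_size": num_frames * page_size,
    }

# The running example: 4 pages, 8 frames, 2-byte pages
print(paging_bits(page_size=2, num_pages=4, num_frames=8))
```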
 Logical Address ---------2^(bits)---> = Process size
          ┌─────┬─────┐
 page no. │  p  │  d  │ offset
          └─────┴─────┘
             |      |
        2^(bits) 2^(bits)
             ↓      ↓
  = No. of pages  = Page size
             ↕      ↕
  = No. of frames = Frame size
             ↑      ↑
        2^(bits) 2^(bits)
             |      |
          ┌─────┬─────┐
 frame no.│  f  │  d  │ offset
          └─────┴─────┘
 Physical Address --------2^(bits)---> = Memory size
 Page Table Size = No. of Pages x (f + extra bits)

 ┌─────┬─────────────┐ -┐
 │  f  │ extra bits  │  |
 ├─────┼─────────────┤  |
 │  f  │ extra bits  │  |  No. of entries
 ├─────┼─────────────┤  |  = No. of Pages
 │  f  │ extra bits  │  |
 ├─────┼─────────────┤  |
 │  f  │ extra bits  │  |
 ├─────┼─────────────┤  |
 │  f  │ extra bits  │  |
 └─────┴─────────────┘ -┘

Example
If Page size = 2 byte (2 addresses), No. of pages = 4, process size?

Then
-> no. of bits in offset (d) = log(2) = 1 bit
-> no. of bits in page no. (p) = log(4) = 2 bits
-> no. of bits in Logical Address = 1 + 2 = 3 bits
-> Process size = 2^3 = 8 byte

  Logical            Physical
  address            address
 ┌─────┐                             ┌──────────┐
 │ CPU │ ⭢ [ p | d ]     [ f | d ] ⭢ │ Physical │
 └─────┘      |             ↑        │ Address  │
              | p    ┌─────┐│        └──────────┘
              └───>  │─────││
                     │  f  │┘
                     └─────┘
                    Page Table

Ques: Consider a paged memory system with logical address of 35 bits and physical address of 29 bits. The page size is 2 KB. Further consider that one page table entry size is 4 bytes.
Find the following:
A. Bits in page offset
B. Bits for page number
C. No. of pages in process
D. Bits for frame number
E. Number of frames in physical memory
F. Page table size
G. Extra bits per entry

Ans:

Logical Address [ p | d ] = 35 bits
                   24  11

Physical Address [ f | d ] = 29 bits
                    18  11
A. page size = 2 KB = 2^11 B; no. of bits in offset = log(2^11) = 11 bits

B. no. of bits in page no. = 35 − 11 = 24 bits

C. no. of pages in process = 2^24

For verification only: Process Size = page size x no. of pages = (2^11) * (2^24) = 2^35; no. of bits in Logical Address = log(2^35) = 35 ✅

D. no. of bits in frame no. = 29 − 11 = 18 bits

E. no. of frames in Physical Memory = 2^18

F. Page Table Size = (no. of entries) x (1 entry size) = (no. of pages) x 4 byte = 2^24 x 2^2 = 2^26 B = 64 MB

G. no. of bits in one entry = 4 B = 32 bits; no. of bits in frame no. = 18 bits; no. of extra bits = 32 − 18 = 14 bits

Ques: Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If page size is 4 KB, what is the approximate size of the page table?
Ans:
Page Size = 4 KB = 2^12 B
No. of bits in offset = log(2^12) = 12 bits
No. of pages = 2^32 / 2^12 = 2^20

virtual address [ p | d ] = 32 bits
                   20  12

Physical Memory = 64 MB = 2^26 B
No. of bits in Physical Address = log(2^26) = 26 bits
No. of bits in frame no. = 26 − 12 = 14 bits

Physical address [ f | d ] = 26 bits
                    14  12

-> Size of Page table = (no. of entries) x (size of one entry)
                      = (no. of pages) x (no. of bits in frame no.)
                      = 2^20 x 14 bits = 14 Mbit ≈ 14/8 MB ≈ 2 MB
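As a hedged cross-check of this example in Python (assuming no extra bits per page-table entry):

```python
import math

# 32-bit virtual address, 64 MB physical memory, 4 KB pages
page_size = 4 * 1024                        # 2^12 B
offset_bits = int(math.log2(page_size))     # 12
page_bits = 32 - offset_bits                # 20
frame_bits = int(math.log2(64 * 2**20)) - offset_bits   # 26 - 12 = 14

pt_size_bits = (2 ** page_bits) * frame_bits   # 2^20 entries x 14 bits each
pt_size_mb = pt_size_bits / 8 / 2**20          # convert bits -> MB
print(offset_bits, page_bits, frame_bits, pt_size_mb)   # 12 20 14 1.75 (~2 MB)
```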
Note: if extra bits are not given, take them as zero

Summary
 Logical Address ---------2^(bits)---> = Process size
          ┌─────┬─────┐
 page no. │  p  │  d  │ offset
          └─────┴─────┘
             |      |
        2^(bits) 2^(bits)
             ↓      ↓
  = No. of pages  = Page size
             ↕      ↕
  = No. of frames = Frame size
             ↑      ↑
        2^(bits) 2^(bits)
             |      |
          ┌─────┬─────┐
 frame no.│  f  │  d  │ offset
          └─────┴─────┘
 Physical Address --------2^(bits)---> = Memory size
 Page Table Size = No. of Pages x (f + extra bits)

 ┌─────┬─────────────┐ -┐
 │  f  │ extra bits  │  |
 ├─────┼─────────────┤  |
 │  f  │ extra bits  │  |  No. of entries
 ├─────┼─────────────┤  |  = No. of Pages
 │  f  │ extra bits  │  |
 ├─────┼─────────────┤  |
 │  f  │ extra bits  │  |
 ├─────┼─────────────┤  |
 │  f  │ extra bits  │  |
 └─────┴─────────────┘ -┘
Formula: ⭐
- No. of bits in offset = log(page size)
- No. of bits in page no. = log(no. of pages)
- No. of bits in Logical Address = log(process size) = p + d
- No. of bits in Physical Address = log(memory size)

Reverse Formula:
- Page size = 2^(no. of bits in offset)
- No. of pages = 2^(no. of bits in page no.)
- Process Size = 2^(no. of bits in Logical Address) = Page size x No. of pages
- Physical Memory size = 2^(no. of bits in Physical Address)

Time Required for Paging
Tₘₘ -> Memory Access Time
Effective Memory Access Time: If Page Table Stored in Main Memory: E.M.A.T = 2*Tₘₘ ⭐
If Page Table Stored in Registers (Special case) E.M.A.T = Tₘₘ
Why 2*Tₘₘ? One `Tₘₘ` for accessing the Page table (to get the physical address) from Memory, and another `Tₘₘ` to get the actual content from Memory.

TLB
Formulas & Numerical Important for GATE
Translation Lookaside Buffer (TLB) -> memory hardware used to reduce the time taken for address translation (and thus for accessing a user memory location)
TLB[p] -> f
   p    f
 ┌────┬────┐
 ├────┼────┤
 ├────┼────┤
 ├────┼────┤
 └────┴────┘

 CPU ---> [ p | d ]
             |
             |    T.L.B
             ├─>  ┌─┬─┐    TLB HIT
             ├─>  ├─┼─┤ ─────────────> [ f | d ] ───> Physical Memory
             ├─>  ├─┼─┤                    ^          (Main Memory)
             └─>  └─┴─┘                    |
                    │ TLB MISS     ┌─────┐ |
                    └─────────────>├─────┤─┘
                                   ├─────┤
                                   └─────┘
                                  Page Table
                                (Main Memory)

How is the T.L.B different from the P.T.?
| TLB | Page Table |
|---|---|
| Small, fast cache in hardware | Large data structure in main memory |
| Stores only recently used page–frame mappings | Stores all page–frame mappings |
| Very fast access (near CPU speed) | Slower access (needs memory access) |
| Hit gives frame number directly | Requires lookup and may need extra memory access |
| Reduces address translation time | Performs full translation without speed optimization |
Effective Memory Access Time (with TLB)
E.M.A.T = H*(tₗ + tₘ) + (1-H)(tₗ + 2*tₘ) = tₗ + tₘ + (1-H)tₘ
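A quick sketch of this formula in Python (the 10 ns / 100 ns figures below are illustrative, not from the notes):

```python
def emat(t_tlb, t_mem, hit_ratio):
    """E.M.A.T = H*(tl + tm) + (1-H)*(tl + 2*tm), page table in main memory."""
    return hit_ratio * (t_tlb + t_mem) + (1 - hit_ratio) * (t_tlb + 2 * t_mem)

# e.g. TLB access 10 ns, memory access 100 ns, 90% TLB hit ratio
print(emat(10, 100, 0.9))   # tl + tm + (1-H)*tm = 10 + 100 + 10 = 120 ns
```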
tₗ - TLB access time
tₘ - Memory access time
H - Hit Ratio

TLB Mapping
- Direct Mapping
- Set Associative
- Full Associative
1. Direct Mapping
 Logical Address
 ┌────────────────────┬───┐
 │          p         │ d │
 └────────────────────┴───┘
             ⬇
 ┌─────┬──────────────┬───┐
 │ Tag │ TLB entry no.│ d │
 └─────┴──────────────┴───┘

No. of bits in TLB entry no. = log(No. of entries in TLB)

2. Set Associative Mapping

 Logical Address
 ┌────────────────────┬───┐
 │          p         │ d │
 └────────────────────┴───┘
             ⬇
 ┌─────┬──────────────┬───┐
 │ Tag │  Set offset  │ d │
 └─────┴──────────────┴───┘

No. of bits in set offset = log(No. of sets in TLB)
No. of sets in TLB = (no. of entries in TLB) / (Associativity)

3. Full Associative Mapping

 Logical Address
 ┌────────────────────┬───┐
 │          p         │ d │
 └────────────────────┴───┘
             ⬇
 ┌────────────────────┬───┐
 │        Tag         │ d │
 └────────────────────┴───┘

Ques: A computer system implements a 40-bit virtual address, page size of 8 kilobytes and a 128-entry translation look-aside buffer (TLB) organized into 32 sets, each having four ways. Assume that the TLB tag does not store any process id. The minimum length of the TLB tag in bits is?
Ans: 22 bits

page size = 8 KB = 2^13 B
d = log(2^13) = 13 bits

[ p | d ] = 40 bits
  27  13

No. of Sets = 128/4 = 32
no. of bits in Set = log(32) = 5

[ Tag | Set | d ] = 40 bits
   22    5   13

Tag = 40 − 5 − 13 = 22 bits

Ques: A computer system implements a 36-bit virtual address, page size of 16 kilobytes and a 256-entry TLB organized into 64 sets, each having 4 ways. Assume that the TLB tag does not store any process id. The minimum length of the TLB tag in bits is?
Ans: 16 bits

page size = 16 KB = 2^14 B
d = log(2^14) = 14 bits

[ p | d ] = 36 bits
  22  14

No. of Sets = 256/4 = 64
no. of bits in Set = log(64) = 6

[ Tag | Set | d ] = 36 bits
   16    6   14

Tag = 36 − 6 − 14 = 16 bits

Page Table in Memory
 Physical Memory
 ┌───────────┐
 ├───────────┤
 │ Page Table│
 ├───────────┤
 │           │
 │           │
 └───────────┘

Multilevel Paging ⭐
- Instead of using a single page table, the system divides it into multiple levels, reducing memory overhead and improving lookup efficiency.
- So we Require another page Tables to get Inner Page Tables
 Outer Page Table          Inner Page Tables
 ┌───────────┐             ┌───────┐ --> f
 │ ----------------------> └───────┘ -->
 │           │             ┌───────┐ -->
 │ ----------------------> └───────┘ -->
 │           │             ┌───────┐ -->
 │ ----------------------> └───────┘ -->
 └───────────┘

 Logical Address
 ┌─────┬─────┬───┐
 │ p1  │ p2  │ d │
 └─────┴─────┴───┘

p1 indexes the outer page table to find the inner page table; p2 indexes that inner table to get the frame f; [ f | d ] is then the physical address.

No. of P.T. entries per page = (Page Size) / (Entry Size)

Note: ⭐ P -> max no. of bits for a page no. at one level. If a table fits into a page exactly, then it uses all P bits.
- No. of bits in Inner page no. = P
- No. of bits in Outer page no. ≤ P
Ques: V.A = 15 bits, Page Size = 128 bytes, Page table entry size = 2 bytes. Number of levels in multilevel paging?

no. of bits in offset = log(page size) = log(128) = 7 bits

[ p | d ] = 15 bits
   8   7

No. of entries per page = Page size / page table entry size = 2^7 / 2 = 2^6
No. of bits in p2 = log(2^6) = 6

[ p1 | p2 | d ] = 15 bits
   2    6   7
No. of levels = 2
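The level-counting procedure used in this example can be sketched as (illustrative Python; `page_table_levels` is a made-up name):

```python
import math

def page_table_levels(va_bits, page_size, entry_size):
    """Peel off log2(page_size/entry_size) page-no. bits per level until none remain."""
    offset = int(math.log2(page_size))
    bits_per_level = int(math.log2(page_size // entry_size))
    remaining = va_bits - offset
    levels = 0
    while remaining > 0:
        remaining -= bits_per_level
        levels += 1
    return levels

print(page_table_levels(15, 128, 2))        # the example above -> 2
print(page_table_levels(46, 8 * 1024, 4))   # 8 KB pages, 4 B entries -> 3
```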
Repeat the step and keep breaking down the outer page bits (pᵢ) until the outermost piece fits within the max bits per level.

Ques: Consider a virtual memory system with physical memory of 8 GB, a page size of 8 KB and a 46-bit virtual address. Assume every page table exactly fits into a single page. If the page table entry size is 4 B, then how many levels of page tables would be required?
Page size = 8 KB -> 2^13 B
Entry size = 4 B -> 2^2 B
no. of bits in d = log(page size) = 13 bits

No. of entries per page = (Page size) / (Entry size) = 2^11
no. of bits per page-table level = 11 bits

[ p1 | p2 | p3 | d ] = 46 bits
  11   11   11   13

=> No. of levels = 3

Ques: VA = 26 bits, Page table entry size = 4 bytes, 2-level paging, Outer page table fits into a page exactly. Page size in Kbytes? Ans: 1 KB
Page size = 2^x B
No. of bits in d = x

No. of entries per page = 2^x B / 4 B = 2^(x−2)
Max no. of bits per page-table level = x − 2

[ p1 | p2 | d ] = 26 bits
  x−2  x−2   x

3x − 4 = 26 => x = 30/3 = 10

Page Size = 2^10 B = 1 KB

Ques: Size of page tables across all levels?
[ p1 | p2 | d ]
   1    7   10
Ans:
d = 10 bits
Page Size = 2^10 B = 1 KB

No. of pages to store outer P.T. = 1 page
No. of pages to store inner P.T. = 2^(p1) = 2^1 = 2 pages
Total no. of pages = 1 + 2 = 3 pages
Total Page Table size = 3 pages x 1 KB = 3 KB

Ques: Size of page tables across all levels?
[ p1 | p2 | p3 | d ]
   1    9    9   11
Ans:

d = 11 bits
Page Size = 2^11 B = 2 KB

No. of pages to store outer P.T. = 1 page
No. of pages to store middle P.T. = 2^(p1) = 2^1 = 2 pages
No. of pages to store inner P.T. = 2^(p1) * 2^(p2) = 2^1 * 2^9 = 2^10 = 1024 pages
Total no. of pages = 1 + 2 + 1024 = 1027 pages
Total Page Table size = 1027 pages x 2 KB = 2054 KB

Segmentation
- Divide the process into logically related partitions (segments)
- Segments are scattered in physical memory
Virtual Memory ⭐
- Feature of OS
- Enables running a larger process with smaller available memory

vb -> valid bit, is the page in Main Memory?
pro. -> process id in Main Memory
   process table
   ┌───┬───┐
 0 │   │ 0 │              Physical Memory (M.M)
   ├───┼───┤              ┌─────────┐           Secondary Memory
 1 │ 0 │ 1 │            0 │ Page 1  │           ┌───────────┐
   ├───┼───┤              ├─────────┤           │ [0] [3]   │
 2 │ 3 │ 1 │            1 │ Page 6  │           │ [5] [7]   │
   ├───┼───┤              ├─────────┤           └───────────┘
 3 │   │ 0 │            2 │ Page 4  │           Stores remaining pages
   ├───┼───┤              ├─────────┤
 4 │ 2 │ 1 │            3 │ Page 2  │
   ├───┼───┤              └─────────┘
 5 │   │ 0 │
   ├───┼───┤
 6 │ 2 │ 1 │
   └───┴───┘
    /     \
 process id   valid bit
 (in M.M)     (is in M.M?)

Detailed Virtual Memory Notes
- Virtual memory is a memory management technique in operating systems that extends usable memory beyond physical RAM by using secondary storage (disk).
- It gives each process an illusion of having its own large, continuous address space even if RAM is smaller.
- Virtual memory maps logical addresses to physical addresses using paging. Only the currently needed pages are loaded into RAM; the remaining pages stay on disk (page file).
- Purpose
- Run programs larger than available physical memory
- Increase multiprogramming by allowing more processes in memory
- Reduce internal and external fragmentation
- Improve memory utilization without requiring huge physical RAM
- How it works
- Program generates logical addresses
- Page table converts logical pages to physical frames
- If a required page is not in RAM → page fault → OS loads it from disk
- Least-used pages may be swapped out to disk to free space
- Key idea: RAM + disk space together behave like a large continuous memory for each process.
- Virtual memory can be implemented using
- Paging
- Segmentation
- Normal Paging vs Virtual Memory with Paging
- Paging without virtual memory: entire process pages are loaded into main memory before execution.
- Virtual memory using paging: only required pages are loaded into main memory; the rest remain on disk.
- Demand Paging:
- Bring pages into memory only when the CPU demands them
- Pure Demand Paging -> don’t store any pages in Main Memory at the start of the process
Page Fault -> the page demanded by the CPU is not present in main memory.
- Then we bring the required page from secondary memory into main memory, replacing an existing page using a page replacement policy
Page Replacement policy:
- First in First Out (FIFO)
- Optimal Policy ⭐
- Least Recently Used (LRU)
- Counting Algorithms: Least Frequently Used (LFU) and Most Frequently Used (MFU)
- Last In First Out (LIFO)
1. First In First Out (FIFO)
- Replace the page which entered earliest into Main Memory

Assume:
- No. of frames = 3 (all empty Initially)
- Page reference sequence: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
M.M [ , , ]

➡1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5    [1, , ]    Page Fault
1, ➡2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5    [1, 2, ]   Page Fault
1, 2, ➡3, 4, 1, 2, 5, 1, 2, 3, 4, 5    [1, 2, 3]  Page Fault
1, 2, 3, ➡4, 1, 2, 5, 1, 2, 3, 4, 5    [4, 2, 3]  Page Fault (FIFO: 1)
1, 2, 3, 4, ➡1, 2, 5, 1, 2, 3, 4, 5    [4, 1, 3]  Page Fault (FIFO: 2)
1, 2, 3, 4, 1, ➡2, 5, 1, 2, 3, 4, 5    [4, 1, 2]  Page Fault (FIFO: 3)
1, 2, 3, 4, 1, 2, ➡5, 1, 2, 3, 4, 5    [5, 1, 2]  Page Fault (FIFO: 4)
1, 2, 3, 4, 1, 2, 5, ➡1, 2, 3, 4, 5    [5, 1, 2]  No Page Fault
1, 2, 3, 4, 1, 2, 5, 1, ➡2, 3, 4, 5    [5, 1, 2]  No Page Fault
1, 2, 3, 4, 1, 2, 5, 1, 2, ➡3, 4, 5    [5, 3, 2]  Page Fault (FIFO: 1) ⭐ Note: `Earliest In = 1` is evicted, not `Earliest Demanded = 5`
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, ➡4, 5    [5, 3, 4]  Page Fault (FIFO: 2)
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, ➡5    [5, 3, 4]  No Page Fault
No. of Page References = 12
No. of Page Faults = 9
Fault Ratio = 9/12, Hit Ratio = 3/12

- Belady’s Anomaly (only in FIFO) -> for some page reference sequences, increasing the no. of frames (no. of pages stored in memory) increases the no. of faults
- Advantage of FIFO
- Simple and easy to implement
- Low overhead
- Disadvantages of FIFO:
- Poor performance
- Doesn’t consider the frequency of use or last used time, simply replaces the old page.
- Suffers from Belady’s Anomaly
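The trace above (and Belady's Anomaly) can be reproduced with a short simulation (a sketch in Python; `fifo_faults` is a made-up name):

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement."""
    resident = set()
    arrival = deque()            # resident pages in order of arrival
    faults = 0
    for page in refs:
        if page in resident:
            continue             # hit: FIFO does not update anything
        faults += 1
        if len(resident) == num_frames:
            resident.discard(arrival.popleft())  # evict the earliest arrival
        resident.add(page)
        arrival.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # -> 9, matching the trace
print(fifo_faults(refs, 4))   # -> 10: Belady's Anomaly (more frames, more faults)
```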
2. Optimal Policy
- Replace the page which will not be used for the longest time in the future
- Assume:
- No. of frames = 3 (all empty Initially)
- Page reference sequence: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
M.M [ , , ]

➡1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5    [1, , ]    Page Fault
1, ➡2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5    [1, 2, ]   Page Fault
1, 2, ➡3, 4, 1, 2, 5, 1, 2, 3, 4, 5    [1, 2, 3]  Page Fault
1, 2, 3, ➡4, 1, 2, 5, 1, 2, 3, 4, 5    [1, 2, 4]  Page Fault (needed after longest time: 3)
1, 2, 3, 4, ➡1, 2, 5, 1, 2, 3, 4, 5    [1, 2, 4]  No Page Fault
1, 2, 3, 4, 1, ➡2, 5, 1, 2, 3, 4, 5    [1, 2, 4]  No Page Fault
1, 2, 3, 4, 1, 2, ➡5, 1, 2, 3, 4, 5    [1, 2, 5]  Page Fault (needed after longest time: 4)
1, 2, 3, 4, 1, 2, 5, ➡1, 2, 3, 4, 5    [1, 2, 5]  No Page Fault
1, 2, 3, 4, 1, 2, 5, 1, ➡2, 3, 4, 5    [1, 2, 5]  No Page Fault
1, 2, 3, 4, 1, 2, 5, 1, 2, ➡3, 4, 5    [3, 2, 5]  Page Fault (1 and 2 are both never needed again; FIFO tie-break evicts 1)
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, ➡4, 5    [3, 4, 5]  Page Fault (needed after longest time: 2)
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, ➡5    [3, 4, 5]  No Page Fault
No. of Page References = 12
No. of Page Faults = 7
Fault Ratio = 7/12, Hit Ratio = 5/12

- Advantages of Optimal Policy:
- Easy to implement
- Simple data structures are used
- Most efficient (provides the minimum possible page faults)
- Disadvantages of Optimal Policy
- Requires future knowledge of the program -> Practically Impossible
- Time Consuming
3. Least Recently Used (LRU)
- Replace the page which was used/demanded least recently (earliest in the past)
M.M [ , , ]

➡1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5    [1, , ]    Page Fault
1, ➡2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5    [1, 2, ]   Page Fault
1, 2, ➡3, 4, 1, 2, 5, 1, 2, 3, 4, 5    [1, 2, 3]  Page Fault
1, 2, 3, ➡4, 1, 2, 5, 1, 2, 3, 4, 5    [4, 2, 3]  Page Fault (LRU: 1)
1, 2, 3, 4, ➡1, 2, 5, 1, 2, 3, 4, 5    [4, 1, 3]  Page Fault (LRU: 2)
1, 2, 3, 4, 1, ➡2, 5, 1, 2, 3, 4, 5    [4, 1, 2]  Page Fault (LRU: 3)
1, 2, 3, 4, 1, 2, ➡5, 1, 2, 3, 4, 5    [5, 1, 2]  Page Fault (LRU: 4)
1, 2, 3, 4, 1, 2, 5, ➡1, 2, 3, 4, 5    [5, 1, 2]  No Page Fault
1, 2, 3, 4, 1, 2, 5, 1, ➡2, 3, 4, 5    [5, 1, 2]  No Page Fault
1, 2, 3, 4, 1, 2, 5, 1, 2, ➡3, 4, 5    [3, 1, 2]  Page Fault (LRU: 5) ⭐ Note: here is the difference between FIFO & LRU
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, ➡4, 5    [3, 4, 2]  Page Fault (LRU: 1)
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, ➡5    [3, 4, 5]  Page Fault (LRU: 2)
No. of Page References = 12
No. of Page Faults = 10
Fault Ratio = 10/12, Hit Ratio = 2/12

- Advantages of LRU:
- Efficient
- Doesn’t suffer from Belady’s Anomaly
- Disadvantages of LRU
- Complex Implementation
- Expensive
- Requires hardware support
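For contrast with FIFO, the LRU trace can be reproduced with a sketch built on `OrderedDict` (illustrative Python; `lru_faults` is a made-up name):

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    """Count page faults under LRU replacement."""
    resident = OrderedDict()     # least recently used page sits at the front
    faults = 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)      # refresh recency on a hit
            continue
        faults += 1
        if len(resident) == num_frames:
            resident.popitem(last=False)    # evict the least recently used page
        resident[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))   # -> 10, matching the trace above
```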
4. Counting Algorithms
- Counting algorithms look at the number of occurrences of particular pages and use this as the criterion for replacements.
- LFU (Least Frequently Used)
- MFU (Most Frequently Used)
5. Last In First Out (LIFO)
- Replace the page which entered last
- Very bad algorithm -> usually only the most recently loaded pages keep getting replaced, while the earliest-loaded pages remain for a long time without any replacement
Frame Allocation
- How many frames do we allocate per process?
- If it is a single-user, single-tasking system, it’s simple - all the frames belong to the user’s process
- Equal Allocation
- Proportional Allocation
New EMAT Formula (with Page Fault)
Effective Memory Access Time Formula (with Page Fault) ⭐
E.M.A.T = H*(tₗ + tₘ) + (1-H)*(tₗ + tₘ + tnew)

tnew = (1 − p)*tₘ + p * [(1 − d) * PFST(not dirty) + d * PFST(dirty)]

tₗ - TLB access time
tₘ - Memory access time
tnew - average page fault service time
H - Hit Ratio
p - page fault rate
d - dirty page fraction
PFST - Page Fault Service Time

• (1 − p)*tₘ → case when no replacement is needed, simple memory access
• p * (...) → case when replacement happens
• (1 − d) * PFST(not dirty) → page swapped out is clean
• d * PFST(dirty) → page swapped out is dirty and must be written back before loading the new page
          If Page Fault (p)
            /           \
   dirty page (d)   not dirty page (1-d)

- Dirty Page: A memory page that has been modified in RAM but not yet written back to disk.
- Not Dirty Page (Clean Page): A page that is either unmodified or already saved to disk.
Page Fault Service Time
PFST for Not Dirty Page = Page transfer time from disk to Main memory + (tₘ)
PFST for Dirty Page = Page transfer time from Main Memory to disk + Page transfer time from disk to Main Memory + (tₘ)

Logical Address in Decimal to Physical Address?

p = LA / Page Size  -> search in P.T. -> f
d = LA % Page Size
Physical Address (PA) = (f*page size) + d
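A minimal sketch of this decimal translation (illustrative Python, reusing the running page table [3, 6, 5, 1] with 2-byte pages):

```python
def translate(la, page_size, page_table):
    """Decimal logical address -> physical address via the page table."""
    p = la // page_size          # page number
    d = la % page_size           # offset within the page
    f = page_table[p]            # frame holding that page
    return f * page_size + d

# LA = 5, 2-byte pages: page 2, offset 1 -> frame 5 -> PA = 5*2 + 1 = 11
print(translate(5, 2, [3, 6, 5, 1]))   # -> 11
```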
p - page no.
d - offset
LA - Logical Address in decimal
PA - Physical Address

Thrashing

When the degree of multiprogramming (number of processes in memory) increases beyond a limit, thrashing occurs: more processes share limited RAM, reducing frames per process and leading to frequent page faults.
 CPU Utilisation
 (Efficiency)
   ^
   |         .   .
   |       .   ^   .
   |     .     |     .
   |   .       |
   | .    thrashing point
   |
   └──────────────────────────> Degree of Multiprogramming

After thrashing, the pages in main memory become insufficient to store the required process pages, leading to frequent page faults and excessive swapping between RAM and disk, further degrading system performance.
- i.e. after thrashing -> the CPU spends more time in page fault service, and less time in process execution
File System
Section titled “File System”File -> file is named collection of related information== that is ==recorded on secondary storage
File Attributes
- Name, Extension, Size, Date, Author, (Created, Modified & Accessed), (Attributes: Read-only, hidden), Default Program, Security Details
File System -> Module of OS which manages, controls and organizes files and related structures.
Disk Blocks -> smallest units of data storage on a disk, into which files are divided and stored. Two types of disk block formatting:
- Low Level (Physical) -> when the manufacturer makes tracks and sectors on the disk
- High Level (Logical) -> disk partitioning, making drives, making blocks
File Allocation Methods -> determines how the disk’s blocks are assigned to store file data.
- Contiguous Allocation
 Secondary Storage (Disk)
 ┌─────────────┐      ┌────┬────┬───┐
 │ [1] [2] [3] │      ├────┼────┼───┤
 │ [4] [5] [6] │      └────┴────┴───┘
 │ [7] [8] [9] │
 └─────────────┘

Performance:
1. Fragmentation: Internal & External :(
2. Increase in File Size: Inflexible :(
3. Type of access: Sequential and Random/Direct :)

- Linked Allocation
 Secondary Storage (Disk)
 ┌──────────────┐
 │ [1]->[2] [3] │      ┌────┬────┬───┐
 │ [4] [5] [6]  │      ├────┼────┼───┤
 │ [7] [8]<-[9] │      └────┴────┴───┘
 └──────────────┘

-> A file is represented in linked-list form, i.e. each disk block keeps the address of the next block containing the file

Performance:
1. Fragmentation: only Internal :)
2. Increase in File Size: Flexible :)
3. Type of access: Only Sequential :(

- Indexed Allocation
 Secondary Storage (Disk)
 ┌─────────────┐
 │ [1] [2] [3] │       Index Block
 │ [4] [5] [6] │         ┌───┐
 │ [7] [8] [9] │         ├─6─┤
 └─────────────┘         ├─12┤
                         ├─5─┤
                         └───┘

-> For a file, the index block (an array) stores all the blocks representing the file
-> Used in present-day computers

Performance:
1. Fragmentation: only Internal :)
2. Increase in File Size: Flexible :)
3. Type of access: Random Access :)

Unix I-node Structure
The inode (index node) is a data structure in a Unix-style file system that describes a file-system object such as a file or a directory.
 inode (index block)
 ┌─────────────────┐
 │ 12 direct       │ ─────────────────────────> [ ]  } direct access
 │ pointers        │ ─────────────────────────> [ ]  } to file blocks
 ├─────────────────┤      ┌───────┐
 │ single-indirect │ ───> │ index │ ──────────> [ ]  } one level of
 ├─────────────────┤      └───────┘                    indirect access
 │ double-indirect │ ───> ┌───────┐   ┌───────┐
 │                 │      │ index │──>│ index │──> [ ]  } two levels of
 └─────────────────┘      └───────┘   └───────┘          indirect access
disk block size = 4 KB
disk block address = 32 bits = 4 B

No. of indexes per block = 4 KB / 4 B = 1 K

File Size by Direct Indexes = 12 * 4 KB
File Size by Single-Indirect Pointer = 1 * 1 K * 4 KB
File Size by Double-Indirect Pointer = 1 * 1 K * 1 K * 4 KB
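Summing those three components can be cross-checked with a few lines (assumptions as stated in the question: 4 KB blocks, 4 B block addresses):

```python
block = 4 * 1024                 # disk block size in bytes
addrs_per_block = block // 4     # 1 K block addresses fit in one index block

direct = 12 * block                      # 48 KB
single = addrs_per_block * block         # 4 MB
double = addrs_per_block ** 2 * block    # 4 GB

max_file = direct + single + double
print(max_file / 2**30)                  # ≈ 4.004 (in GB)
```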
Max File Size = 12*4 KB + (1*1K*4 KB) + (1*1K*1K*4 KB) = 48 KB + 4 MB + 4 GB ≈ 4.004 GB

Disk Scheduling
Cylinder:
- Collection of tracks of same radius from all surfaces
- Data is stored on the disk cylinder-wise so that a single seek can cover the maximum number of sectors
Disk Scheduling:
- Done by operating system to schedule I/O request arriving for the disk
Disk Scheduling Algorithms:
- FCFS (First Come First Serve)
- SSTF (Shortest Seek Time First)
- Scan
- C-Scan (Circular-Scan)
- Look
- C-Look (Circular-Look)
1. FCFS (First Come First Serve)
Assume:
- 200 Cylinder : 0 - 199
- Suppose the order of request is : 72, 160, 33, 130, 14, 6, 180
- The Read/Write arm is at 50
Head at 50

FCFS: ➡72, 160, 33, 130, 14, 6, 180    50  -> 72  = abs(72-50)  = 22 movements
FCFS: ➡160, 33, 130, 14, 6, 180        72  -> 160 = abs(160-72) = 88 movements
FCFS: ➡33, 130, 14, 6, 180             160 -> 33  = abs(33-160) = 127 movements
FCFS: ➡130, 14, 6, 180                 33  -> 130 = abs(130-33) = 97 movements
FCFS: ➡14, 6, 180                      130 -> 14  = abs(14-130) = 116 movements
FCFS: ➡6, 180                          14  -> 6   = abs(6-14)   = 8 movements
FCFS: ➡180                             6   -> 180 = abs(180-6)  = 174 movements
Total no. of head movements = 22 + 88 + 127 + 97 + 116 + 8 + 174 = 632 movements

- Advantages:
- Every request gets a fair chance
- No indefinite postponement
- Disadvantages:
- Does not try to optimize seek time
- May not provide the best possible service
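The FCFS total above can be checked with a short sketch (illustrative Python; `fcfs_movements` is a made-up name):

```python
def fcfs_movements(start, requests):
    """Total head movement when requests are served in arrival order."""
    total, pos = 0, start
    for track in requests:
        total += abs(track - pos)
        pos = track
    return total

print(fcfs_movements(50, [72, 160, 33, 130, 14, 6, 180]))   # -> 632
```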
2. SSTF (Shortest Seek Time First)
Assume:
- 200 Cylinder : 0 - 199
- Suppose the order of request is : 72, 160, 33, 130, 14, 6, 180
- The Read/Write arm is at 50
Head at 50

SSTF: 72, 160, ➡33, 130, 14, 6, 180    50  -> 33  = (50-33) movements
SSTF: 72, 160, 130, ➡14, 6, 180        33  -> 14  = (33-14) movements
SSTF: 72, 160, 130, ➡6, 180            14  -> 6   = (14-6) movements
SSTF: ➡72, 160, 130, 180               6   -> 72  = (72-6) movements
SSTF: 160, ➡130, 180                   72  -> 130 = (130-72) movements
SSTF: ➡160, 180                        130 -> 160 = (160-130) movements
SSTF: ➡180                             160 -> 180 = (180-160) movements
Total no. of head movements = (50-33) + (33-14) + (14-6) + (72-6) + (130-72) + (160-130) + (180-160) = (50-6) + (180-6) = 218 movements

- Advantages:
- Average response time decreases
- Throughput increases
- Disadvantages:
- Overhead to calculate seek time in advance
- Can cause Starvation for a request if it has higher seek time as compared to incoming requests
- Higher variance of Response time as SSTF favors only some requests
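SSTF can be sketched the same way: always serve the pending request nearest to the current head position (illustrative Python; `sstf_movements` is a made-up name):

```python
def sstf_movements(start, requests):
    """Total head movement when the closest pending request is served first."""
    pending = list(requests)
    total, pos = 0, start
    while pending:
        nearest = min(pending, key=lambda t: abs(t - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

print(sstf_movements(50, [72, 160, 33, 130, 14, 6, 180]))   # -> 218
```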
3. SCAN (Elevator):
- Initial Direction is given
- Goes up to last request track in one direction before changing direction
- Also goes to the edge track in the initial direction: 0 (if going left) or 199 (if going right)

current track -> 50
minimum/maximum requested track -> x

if going rightward :  50 --------------> 199
                       x <-------------- 199

if going leftward  :  0 <-------------- 50
                      0 --------------> x
- High throughput
- Low Variance of Response time
- Average response time decreases
- Disadvantage:
- Long waiting time for request for location just visited by disk arm
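A sketch of the SCAN movement count described above (illustrative Python; assumes tracks 0..199 and that the head always travels to the edge in its initial direction, as these notes state; `scan_movements` is a made-up name):

```python
def scan_movements(start, requests, direction, max_track=199):
    """SCAN: sweep to the edge in the given direction, then reverse."""
    if direction == "right":
        below = [t for t in requests if t < start]
        # to the right edge, then back left to the minimum requested track
        return (max_track - start) + ((max_track - min(below)) if below else 0)
    else:
        above = [t for t in requests if t > start]
        # to track 0, then back right to the maximum requested track
        return start + (max(above) if above else 0)

reqs = [72, 160, 33, 130, 14, 6, 180]
print(scan_movements(50, reqs, "right"))   # (199-50) + (199-6) = 342
print(scan_movements(50, reqs, "left"))    # 50 + 180 = 230
```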
4. LOOK
- Same as SCAN, but does not go to the edge track if it is not requested

current track -> 50
minimum requested track -> xmin
maximum requested track -> xmax

if going rightward :  50 -----------> xmax
                      xmin <--------- xmax

if going leftward  :  xmin <--------- 50
                      xmin ---------> xmax

5. C-SCAN
- Initial Direction is given
- Does not change direction
- Also goes to the edge tracks: 0 as well as 199

current track -> 50
minimum/maximum requested track -> x

if going rightward :       ↷
                      50 --------------> 199
                      0 ------------> x

if going leftward :        ↶
                      0 <-------------- 50
                        <-------------- 199

- Advantage
- Provides more uniform wait time compared to SCAN
6. C-Look
- Same as C-SCAN, but does not go to the edge track if it is not requested

current track -> 50
minimum requested track -> xmin
maximum requested track -> xmax
x -> minimum requested track > 50 (going right) or maximum requested track < 50 (going left)

if going rightward :       ↷
                      50 ----------> xmax
                      xmin --------> x

if going leftward :        ↶
                      xmin <--------- 50
                      x <------------ xmax

Completed ✅