OS Important Questions (PSU)
Ques. If a CPU takes 10 ms to decide to execute a process of 100 ms, approximately what percentage of time will be wasted by CPU in scheduling work?
- Scheduling time (overhead) = 10 ms
- Actual process execution time = 100 ms
- Total time taken = 10 ms (scheduling) + 100 ms (execution) = 110 ms
- % time wasted in scheduling = (10 / 110) × 100 ≈ 9.09 % ✅
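The arithmetic above can be checked with a short script (a minimal sketch; the 10 ms and 100 ms figures come from the question):

```python
# Scheduling overhead as a percentage of total CPU time.
scheduling_time = 10    # ms, time spent deciding which process to run
execution_time = 100    # ms, useful work

total_time = scheduling_time + execution_time      # 110 ms
wasted_pct = scheduling_time / total_time * 100    # 9.0909...

print(f"{wasted_pct:.2f}% of CPU time wasted on scheduling")  # 9.09%
```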
Ques. In a multithreaded process, what do threads NOT share?
Multithreading: Shared vs Private Data
Shared Among Threads (within the same process):
- Code Section → all threads share the same program code.
- Data Section → global/static variables are shared.
- Files → file descriptors opened by one thread are visible to others.
Not Shared (Private to Each Thread):
- Stack → each thread has its own stack for function calls, local variables, return addresses, etc.
Ques. Preemptive scheduling was introduced in which Windows version?
Preemptive Scheduling in Windows
- Preemptive scheduling allows the OS to interrupt a running process and switch the CPU to another process based on priority or time slice. It enables true multitasking.
- Ans: Windows 95 ✅
- Windows NT 3.1: had full preemptive multitasking from the start, but targeted enterprise systems, not mainstream consumers. ❌
- Windows 95: the first consumer Windows OS to bring preemptive multitasking (for 32-bit apps). ✅
Windows OS Evolution & Scheduling Type
| Windows Version | Release Year | Scheduling Type | Notes |
|---|---|---|---|
| Windows 3.0 | 1990 | Cooperative Scheduling | 16-bit only; each process voluntarily yields control |
| Windows NT 3.1 | 1993 | Fully Preemptive Scheduling | First NT version; 32-bit OS with full preemptive multitasking |
| Windows 95 | 1995 | Hybrid: Preemptive (32-bit), Cooperative (16-bit) | First to support preemptive multitasking for 32-bit apps; legacy support for 16-bit |
| Windows NT 4.0 | 1996 | Fully Preemptive Scheduling | GUI similar to Win95; better performance and stability |
| Windows 98 | 1998 | Same as Windows 95 | Improved multitasking for 32-bit; still cooperative for 16-bit |
Ques. Which of the following statements are True/False?
Interrupts and Interrupt Vector Table
What is an Interrupt?
- An interrupt is a signal to the CPU indicating an event that needs immediate attention.
- Interrupts pause the current execution to run a specific routine called an interrupt handler.
(i) Device controllers raise an interrupt by asserting a signal on the interrupt request line ✅ True
- Device controllers (e.g., keyboard, disk) use a dedicated line (IRQ β Interrupt Request Line) to notify the CPU of an event.
(ii) Interrupt vector contains the memory addresses of specialized interrupt handlers ✅ True
- An interrupt vector is a table that stores pointers (addresses) to the corresponding interrupt service routines (ISRs). ✅
- On interrupt, the CPU uses this table to locate and execute the right handler.
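The vector-table lookup can be modelled as a table of handler routines indexed by interrupt number (a simplified sketch; the IRQ numbers and handler names are illustrative, not real hardware values):

```python
# Simplified model of an interrupt vector table:
# the interrupt number indexes a table of handler routines.
def keyboard_handler():
    return "handled keyboard IRQ"

def disk_handler():
    return "handled disk IRQ"

# Interrupt vector: interrupt number -> its interrupt service routine
interrupt_vector = {
    1: keyboard_handler,   # illustrative IRQ numbers
    14: disk_handler,
}

def dispatch(irq):
    """On interrupt, the CPU uses the vector to find the right handler."""
    handler = interrupt_vector[irq]
    return handler()

print(dispatch(14))
```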
Ques. Which of the following OS/process statements are True/False?
OS Memory Access and Protection
- CPU can directly access main memory and registers ✅ True
  - Registers are the fastest memory, and main memory is accessed via the address bus.
- Illegal memory access results in a trap interrupt ✅ True
  - A trap is a synchronous software interrupt triggered by events like invalid memory access.
- Memory protection among processes is implemented using base and limit registers ✅ True
  - The base register holds the start address; the limit register defines the size. This ensures a process accesses only its own memory.
- Memory Address Register (MAR) stores the logical address ❌ False
  - The MAR stores the physical address for memory access; the logical address is translated before reaching the MAR.
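Base/limit protection can be sketched as a bounds check on every memory reference (base and limit values are illustrative; a real MMU does this in hardware and raises a trap on violation):

```python
# Base/limit memory protection: every logical address a process
# generates must fall inside [0, LIMIT); it is then relocated by BASE.
BASE = 3000    # start of the process's memory (illustrative)
LIMIT = 1200   # size of the process's allocation (illustrative)

def check_access(logical_addr):
    """Return the physical address, or raise a trap on illegal access."""
    if 0 <= logical_addr < LIMIT:
        return BASE + logical_addr    # legal: relocate by base register
    # In hardware this is a trap to the OS, not a Python exception.
    raise MemoryError("trap: illegal memory access")

print(check_access(100))   # inside the allocation -> 3100
```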
Memory Access & Protection Notes
1. Logical Address vs Physical Address
- Logical Address: Address generated by the CPU.
- Physical Address: Actual address in RAM.
- Logical → Physical translation is handled by the MMU (Memory Management Unit).
2. Memory Address Register (MAR)
- Holds physical address to access RAM.
- Works with the Memory Buffer Register (MBR) to read/write data.
3. Memory Protection
- Prevents a process from accessing memory of another process.
- Implemented using:
  - Base Register: holds the starting address of process memory.
  - Limit Register: holds the range (size) of memory allocated.
4. Trap Interrupt
- Synchronous interrupt caused by:
- Illegal memory access
- Divide by zero
- Invalid instruction
- CPU transfers control to OS interrupt handler.
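As a loose software analogy (not real trap hardware), a divide-by-zero in Python synchronously transfers control to a handler, much as a hardware trap transfers control to the OS:

```python
# Software analogy for a trap: an illegal operation (divide by zero)
# synchronously transfers control to a handler at the faulting point.
def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        # In hardware, this is where the OS trap handler would run.
        return "trap handled"

print(divide(10, 0))  # trap handled
print(divide(10, 2))  # 5.0
```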
5. CPU Access Capabilities
- CPU can directly access:
- Registers (fastest)
- Main Memory (RAM)
- Cannot access secondary memory directly (needs an I/O controller).
6. Protection Mechanisms
- Segmentation: Divides process into segments (code, data, stack).
- Paging: Divides both process and physical memory into fixed-size units (pages and frames).
- Page Table: Maps logical to physical addresses.
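The logical-to-physical mapping described above can be sketched with a page table from page numbers to frame numbers (the page size and table contents are made-up illustrative values):

```python
# Logical -> physical address translation with paging.
PAGE_SIZE = 1024                  # bytes per page/frame (illustrative)

page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE    # which page the address is in
    offset = logical_addr % PAGE_SIZE   # position inside the page
    frame = page_table[page]            # page-table lookup (MMU's job)
    return frame * PAGE_SIZE + offset   # physical address

print(translate(1030))  # page 1, offset 6 -> frame 2 -> 2054
```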
| Concept | Description |
|---|---|
| Logical Address | Generated by CPU |
| Physical Address | Actual location in RAM |
| MAR (Memory Address Register) | Stores physical address for access |
| Trap Interrupt | Triggered by invalid memory actions |
| Base & Limit Registers | Used for enforcing process-level memory protection |
| CPU Memory Access | Can directly access registers & main memory |
| MMU | Translates logical to physical address |
Ques. What will be the Effective Access Time in Demand Paging?
Demand Paging - Effective Access Time
1. Inputs for total memory access time:
   - Memory access time (ma) = 300 ns = 0.0003 ms
   - Page fault rate (p) = 5% = 0.05
2. Inputs for page fault service time:
   - Page to be replaced is not modified (40%) → service time = 10 ms
   - Page to be replaced is modified (60%) → service time = 15 ms

Page Fault Service Time (PFST):
PFST = (%Modified × Modified_Time) + (%Not_Modified × Not_Modified_Time)
PFST = 0.6 × 15 + 0.4 × 10 = 13 ms

Effective Access Time (EAT):
EAT = (1 − p) × ma + p × PFST
EAT = (1 − 0.05) × 0.0003 + 0.05 × 13 = 0.000285 + 0.65 = 0.650285 ms
⇒ 650.285 μs ✅
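The same computation in code (all values come from the question):

```python
# Effective Access Time (EAT) under demand paging.
ma = 300e-9    # memory access time: 300 ns, in seconds
p = 0.05       # page fault rate: 5%

# Page fault service time, weighted by whether the victim page is dirty:
# 60% modified -> 15 ms, 40% not modified -> 10 ms.
service = 0.60 * 15e-3 + 0.40 * 10e-3    # = 13 ms

eat = (1 - p) * ma + p * service          # in seconds
print(f"EAT = {eat * 1e6:.3f} microseconds")
```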
Ques. Which of the following is a valid C structure to represent the process control block in the Linux operating system?
Process Control Block in Linux
- PCB (Process Control Block): Stores all info about a process (state, registers, PID, memory, etc.)
- In Linux, the PCB is represented using the structure:
`struct task_struct` ✅
- It includes:
- Process ID (PID)
- Process state
- Scheduling info
- Pointers to parent/child processes
- CPU context, file descriptors, memory info, etc.
`task_struct` is the core data structure used by the Linux kernel to manage processes.
Invalid options: ❌
- `job_struct` → not used in the Linux PCB
- `program_struct` → not defined
- `process_struct` → incorrect name
Ques. Which of the following are valid models to represent the relationship between user threads and kernel threads?
User Threads vs Kernel Threads: Mapping Models
- A thread is the smallest unit of CPU execution within a process. It defines a single sequence of instructions.
- User Threads: managed in user space by user-level libraries (fast, but the kernel doesn't know about them).
- Kernel Threads: Managed by the OS kernel (can be scheduled by the OS on CPUs).
→ To run on the CPU, user threads must eventually be mapped to kernel threads ✅
1. One-to-One Model ✅ Valid Model
- (1 user thread runs on 1 kernel thread)
- 1 user thread ↔ 1 kernel thread
- True parallel execution possible (with multiple cores).
- More memory overhead (1 kernel thread per user thread).
- Example: Windows, Linux (POSIX threads).

    User Threads:   T1  T2  T3
                    ||  ||  ||
    Kernel Threads: K1  K2  K3

2. Many-to-One Model ✅ Valid Model
- (n user threads share 1 kernel thread)
- Many user threads → 1 kernel thread
- If one thread blocks, all are blocked (since there is only 1 kernel thread).
- Fast thread switching (done in user space), but no real parallelism.
- Example: Java green threads, some older systems.

    User Threads:  T1  T2  T3
                     \  |  /
              [Kernel Thread K1]

3. Many-to-Many Model ✅ Valid Model
- (n user threads are scheduled over m kernel threads)
- Many user threads → many kernel threads (fewer or equal)
- Kernel can schedule some threads in parallel while others block.
- Combines the benefits of the two models above.
- Example: Solaris, modern threading libraries.

    User Threads:   T1  T2  T3  T4
                      \  |   | /
    Kernel Threads:    K1    K2

4. One-to-Many Model ❌ Invalid Model
- (1 user thread scheduled over m kernel threads) ❌
- 1 user thread → multiple kernel threads ❌
- Illogical: a single user thread can't be split across multiple kernel threads.
- Not supported or implemented in any OS.

    User Threads:     T1 ❌
                     / | \
    Kernel Threads: K1 K2 K3

User Threads, Kernel Threads & Mapping Models
**User vs Kernel Threads:**
| Feature | User Thread | Kernel Thread |
|---|---|---|
| Managed by | User-level libraries | OS Kernel |
| Speed | Fast (no kernel call) | Slower (system call overhead) |
| Scheduling | Done in user space | Done by OS |
| Blocking | One blocked thread can block all | Only the blocked thread is paused |
| Visible to OS? | NO | YES |
| Parallelism? | Depends | YES |
Role in Models:
- Thread-to-kernel mapping models define how efficiently threads utilize system resources (CPU, memory, parallelism).
- Example:
- One-to-One model → true parallelism
- Many-to-One model → low overhead but no parallelism
- Many-to-Many model → best of both worlds
| Mapping Type | Description | Managed By |
|---|---|---|
| One-to-One | 1 user thread runs on 1 kernel thread | Kernel (OS) |
| Many-to-One | Many user threads share 1 kernel thread | User-level library |
| Many-to-Many | n user threads over m kernel threads (n ≥ m) | Both |
Note: "User-level threads are not managed by the kernel in the many-to-one model" because the kernel sees only 1 thread, so it can't manage individual user threads. ✅
Ques. Which of the following OS/process statements is/are TRUE?
- Kernel-level threads are managed by the operating system. ✅ True
- A heavyweight process has multiple threads of execution. ❌ False
  - A heavyweight process is a traditional process with its own memory space and a single thread of execution.
  - A multithreaded process is a single program that can execute multiple tasks concurrently by using multiple threads.
- There are three types of threads: user, kernel, and system-level threads. ❌ False
  - There are two standard types of threads: user level and kernel level.
- User-level threads are managed by the kernel. ❌ False
  - User-level threads are managed by user-space libraries, not by the OS/kernel directly.
Ques. Which of the following statements is/are TRUE?
- Execution-time address binding generates different logical and physical addresses. ✅ True
  - Execution-time address binding (done by the MMU) means logical address ≠ physical address: logical addresses are translated to physical addresses during execution.
- The user program can access the physical address. ❌ False
  - A user program cannot access physical addresses directly. It always deals with logical (virtual) addresses; the OS + MMU translate them to physical addresses.
Address Binding and Memory Management
| Address Binding Time | Logical = Physical? | Example |
|---|---|---|
| Compile-time | ✅ Yes | Absolute code; no relocation allowed |
| Load-time | ✅ Yes (identical once bound at load) | Relocatable code bound when the program is loaded |
| Execution-time | ❌ No | Translation done at run time by the MMU (hardware) |
Only OS or kernel mode code can directly access physical memory.
User programs work with logical/virtual addresses only.
Ques. The CPU fetches instructions from memory according to the value of ------ ?
Program Counter ✅
- The Program Counter (PC) holds the address of the next instruction.
- The CPU uses this address to fetch the instruction from memory.
- The fetched instruction is then loaded into the Instruction Register (IR) for decoding and execution.
| Register | Role |
|---|---|
| Program Counter | Holds address of next instruction (used to fetch) |
| Instruction Register | Holds the current fetched instruction (after fetching) |
| Status Register | Stores flags (zero, carry, overflow, etc.) |
| Data Register | Temporarily holds data being processed |
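The fetch step can be modelled in a few lines: the PC supplies the address, the fetched word lands in the IR, and the PC advances (a toy model; the instruction words are made up):

```python
# Toy fetch cycle: the PC addresses memory, the fetched instruction
# goes into the IR, and the PC then advances to the next instruction.
memory = {0: "LOAD R1, 100", 1: "ADD R1, R2", 2: "HALT"}  # toy program

pc = 0     # Program Counter: address of the NEXT instruction
ir = None  # Instruction Register: the current fetched instruction

def fetch():
    global pc, ir
    ir = memory[pc]   # fetch from the address the PC points at
    pc += 1           # PC now points at the next instruction
    return ir

print(fetch())  # LOAD R1, 100
print(fetch())  # ADD R1, R2
```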
Ques. Valid Metrics to Compare CPU Scheduling Algorithms:
- CPU Utilization → % of time the CPU is busy
- Throughput → number of processes completed per unit time
- Turnaround Time = Completion time − Arrival time
- Waiting Time = Turnaround time − Burst time
- Response Time = Time of first response − Arrival time
- Fairness (optional) → how equally processes are treated
Companies Assessment Questions
What is the primary purpose of a cache memory?
- To store frequently accessed data and instructions to speed up the operation of the computer ✅
What is a segmentation fault?
- An error caused by a program attempting to access memory it is not allowed to access (e.g., outside its valid segments) ✅
Which of the following scheduling algorithms can lead to starvation?
- Priority Scheduling ✅
- Shortest Job Next (SJN) ✅

Which of the following is true about deadlocks?
- Deadlocks can occur when the four necessary conditions (mutual exclusion, hold and wait, no preemption, circular wait) are satisfied ✅
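The circular-wait condition above can be checked by looking for a cycle in a wait-for graph (a minimal sketch; the process names and edges are made up):

```python
# Detect circular wait: a cycle in the wait-for graph means deadlock
# is possible (given the other three conditions also hold).
def has_cycle(wait_for):
    """DFS cycle detection on a process -> waits-for-process graph."""
    visiting, done = set(), set()

    def dfs(p):
        if p in visiting:
            return True          # back edge: we closed a circular wait
        if p in done:
            return False
        visiting.add(p)
        for q in wait_for.get(p, []):
            if dfs(q):
                return True
        visiting.discard(p)
        done.add(p)
        return False

    return any(dfs(p) for p in wait_for)

# P1 waits for P2, P2 for P3, P3 for P1 -> circular wait
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_cycle({"P1": ["P2"], "P2": ["P3"]}))                # False
```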
When would the SCAN disk scheduling algorithm exhibit starvation?
- When new requests keep arriving near the arm's current position, the arm can keep servicing them while requests elsewhere are repeatedly delayed