
OS Important Questions (PSU)

Ques. If a CPU takes 10 ms to decide to execute a process of 100 ms, approximately what percentage of time will be wasted by CPU in scheduling work?

- Scheduling time (overhead) = 10 ms
- Actual process execution time = 100 ms
- Total time taken = 10 ms (scheduling) + 100 ms (execution) = 110 ms
- % Time wasted in Scheduling = (10/110) × 100 = 9.09 % ✅

Ques. In a multithreaded process, what do threads NOT share? ⭐

Multithreading – Shared vs Private Data:

Shared Among Threads (within the same process):

- Code Section → All threads share the same program code.
- Data Section → Global/static variables are shared.
- Files → File descriptors opened by one thread are visible to others.

Not Shared (Private to Each Thread):

- Stack → Each thread has its own stack for function calls, local variables, return addresses, etc.
- Registers → Each thread has its own register set, including the program counter and stack pointer.

Ques. Preemptive scheduling was introduced in which Windows version?

Preemptive Scheduling in Windows

  • Preemptive scheduling allows the OS to interrupt a running process and switch the CPU to another process based on priority or time slice. It enables true multitasking.
  • Ans: Windows 95 ✅
  • Windows NT 3.1: had full preemptive multitasking from the start, but targeted enterprise systems, not mainstream consumers. ⭐
  • Windows 95: the first consumer Windows OS to bring preemptive multitasking (for 32-bit apps).

Windows OS Evolution & Scheduling Type

| Windows Version | Release Year | Scheduling Type | Notes |
| --- | --- | --- | --- |
| Windows 3.0 | 1990 | Cooperative | 16-bit only; each process voluntarily yields control |
| Windows NT 3.1 | 1993 | Fully preemptive | First NT version; 32-bit OS with full preemptive multitasking |
| Windows 95 | 1995 | Hybrid: preemptive (32-bit), cooperative (16-bit) | First to support preemptive multitasking for 32-bit apps; legacy support for 16-bit |
| Windows NT 4.0 | 1996 | Fully preemptive | GUI similar to Win95; better performance and stability |
| Windows 98 | 1998 | Same as Windows 95 | Improved multitasking for 32-bit; still cooperative for 16-bit |

Ques. State whether each of the following statements is True or False

Interrupts and Interrupt Vector Table

What is an Interrupt?

  • An interrupt is a signal to the CPU indicating an event that needs immediate attention.

  • Interrupts pause the current execution to run a specific routine called an interrupt handler. ⭐

(i) Device controllers raise an interrupt by asserting a signal on the interrupt request line ✅ True

  • Device controllers (e.g., keyboard, disk) use a dedicated line (IRQ – Interrupt Request Line) to notify the CPU of an event.

(ii) Interrupt vector contains the memory addresses of specialized interrupt handlers ✅ True

  • An interrupt vector is a table that stores pointers (addresses) to the corresponding interrupt service routines (ISRs) ⭐

  • On interrupt, the CPU uses this table to locate and execute the right handler.

Ques. State whether each of the following OS/process statements is True or False

OS Memory Access and Protection

  • CPU can directly access main memory and registers ✅ True
    Registers are the fastest memory, and main memory is accessed via address buses.
  • Illegal memory access results in a trap interrupt ✅ True
    A trap is a synchronous software interrupt triggered by events such as invalid memory access.

  • Memory protection among processes is implemented using base and limit registers ✅ True
    The base register holds the start address; the limit register defines the size. This ensures a process accesses only its own memory.
  • Memory Address Register (MAR) stores the logical address ❌ False
    The MAR stores the physical address for memory access; the logical address is translated before reaching the MAR.

πŸ‘‰ Memory Access & Protection Notes ⭐

1. Logical Address vs Physical Address

  • Logical Address: Address generated by the CPU.
  • Physical Address: Actual address in RAM.
  • Logical → Physical translation is handled by the MMU (Memory Management Unit).

2. Memory Address Register (MAR)

  • Holds physical address to access RAM.
  • Works with Memory Buffer Register (MBR) to read/write data.

3. Memory Protection

  • Prevents a process from accessing memory of another process.
  • Implemented using:
    • Base Register: Holds starting address of process memory.

    • Limit Register: Holds the range (size) of memory allocated.

4. Trap Interrupt

  • Synchronous interrupt caused by:
    • Illegal memory access
    • Divide by zero
    • Invalid instruction
  • CPU transfers control to OS interrupt handler.

5. CPU Access Capabilities

  • CPU can directly access:
    • Registers (fastest)
    • Main Memory (RAM)
  • Cannot access secondary memory directly (needs an I/O controller).

6. Protection Mechanisms

  • Segmentation: Divides process into segments (code, data, stack).
  • Paging: Divides both process and physical memory into fixed-size units (pages and frames).
  • Page Table: Maps logical to physical addresses.

| Concept | Description |
| --- | --- |
| Logical Address | Generated by the CPU |
| Physical Address | Actual location in RAM |
| MAR (Memory Address Register) | Stores the physical address for access |
| Trap Interrupt | Triggered by invalid memory actions |
| Base & Limit Registers | Used for enforcing process-level memory protection |
| CPU Memory Access | Can directly access registers & main memory |
| MMU | Translates logical to physical addresses |

Ques. What will be the Effective Access Time in Demand Paging

Demand Paging - Effective Access Time

1. Inputs for Total Time to Access Memory:
- Memory access time (ma) = 300 ns = 0.0003 ms
- Page fault rate (p) = 5% = 0.05
2. Inputs for Page Fault Service Time:
- If page to be replaced is:
- Not modified (40%) → service time = 10 ms
- Modified (60%) → service time = 15 ms

Page Fault Service Time (PFST)

PFST = (%Modified × Modified_Time) + (%Not_Modified × Not_Modified_Time)
Page fault service time = 0.6 × 15 + 0.4 × 10 = 13 ms

Effective Access Time (EAT) ⭐

EAT = (1 − p) × ma + p × (Page fault service time)
EAT = (1 − 0.05) × 0.0003 + 0.05 × 13
= 0.95 × 0.0003 + 0.05 × 13
= 0.000285 + 0.65
= 0.650285 ms
=> 650.285 μs ✅

Ques. Which of the following is a valid 'C' structure to represent the process control block in the Linux operating system?

Process Control Block in Linux

  • PCB (Process Control Block): Stores all info about a process (state, registers, PID, memory, etc.)
  • In Linux, the PCB is represented using the structure:
struct task_struct ✅
  • It includes:
    • Process ID (PID)
    • Process state
    • Scheduling info
    • Pointers to parent/child processes
    • CPU context, file descriptors, memory info, etc.

task_struct is the core data structure used by the Linux kernel to manage processes.

Invalid options: ❌

  • job_struct – Not used in Linux PCB
  • program_struct – Not defined
  • process_struct – Incorrect name

Ques. Which of the following are valid models to represent the relationship between user threads and kernel threads?

User Threads vs Kernel Threads – Mapping Models

  • A thread is the smallest unit of CPU execution within a process. It defines a single sequence of instructions.

  • User Threads: Managed in user space by user-level libraries (fast, but kernel doesn’t know them).
  • Kernel Threads: Managed by the OS kernel (can be scheduled by the OS on CPUs).

-> To run user threads on the CPU, they must eventually use kernel threads ⭐

1. One-to-One Model ✅ Valid Model

  • (1 user thread runs on 1 kernel thread)
  • 1 user thread → 1 kernel thread
  • True parallel execution possible (if multiple cores).
  • More memory overhead (1 kernel thread per user thread).
  • Example: Windows, Linux (POSIX threads).
User Threads: T1 T2 T3
|| || ||
Kernel Threads: K1 K2 K3

2. Many-to-One Model ✅ Valid Model

  • (n user threads share 1 kernel thread)
  • Many user threads → 1 kernel thread
  • If one thread blocks, all are blocked (since only 1 kernel thread).
  • Fast thread switching (done in user space), but no real parallelism.
  • Example: Java green threads, some older systems.
User Threads: T1 T2 T3
\ | /
[Kernel Thread K1]

3. Many-to-Many Model ✅ Valid Model

  • (n user threads are scheduled over m kernel threads)
  • Many user threads → many kernel threads (fewer or equal)
  • Kernel can schedule some threads in parallel, block others.
  • Combines benefits of above two models.
  • Example: Solaris, modern threading libraries.
User Threads: T1 T2 T3 T4
|| | |
Kernel Threads: K1 K2

4. One-to-Many Model ❌ Invalid Model

  • (1 user thread scheduled over m kernel threads) ❌
  • 1 user thread → multiple kernel threads ❌
  • Illogical: A single user thread can’t control or split into multiple kernel threads.
  • Not supported or implemented in any OS.
User Threads: T1 ❌
/ | \
Kernel Threads: K1 K2 K3

πŸ‘‰ User Threads, Kernel Threads & Mapping Model

User vs Kernel Threads:

| Feature | User Thread | Kernel Thread |
| --- | --- | --- |
| Managed by | User-level libraries | OS kernel |
| Speed | Fast (no kernel call) | Slower (system-call overhead) |
| Scheduling | Done in user space | Done by the OS |
| Blocking | One blocked thread can block all | Only the blocked thread is paused |
| Visible to OS? | No | Yes |
| Parallelism? | Depends | Yes |

Role in Models :

  • Thread-to-kernel mapping models define how efficiently threads utilize system resources (CPU, memory, parallelism).
  • Example:
    • One-to-One model → true parallelism
    • Many-to-One model → low overhead but no parallelism
    • Many-to-Many model → best of both worlds

| Mapping Type | Description | Managed By |
| --- | --- | --- |
| One-to-One | 1 user thread runs on 1 kernel thread | Kernel (OS) |
| Many-to-One | Many user threads share 1 kernel thread | User-level library |
| Many-to-Many | n user threads over m kernel threads (n ≥ m) | Both |

Note: "User-level threads are not managed by the kernel in the many-to-one model" because the kernel sees only 1 thread, so it cannot manage individual user threads. ⭐

Ques. Which of the following OS/process statements is TRUE

  1. Kernel-level threads are managed by the operating system. ✅
  2. A heavyweight process has multiple threads of execution ❌
    • A heavyweight process is a traditional process with its own memory space and a single thread of execution.
    • A multithreaded process is a single program that can execute multiple tasks concurrently by using multiple threads.
  3. There are three types of threads: user, kernel, and system-level threads. ❌
    • There are two standard types of threads: user-level and kernel-level.
  4. User-level threads are managed by the kernel ❌
    • User-level threads are managed by user-space libraries, not by the OS/kernel directly.

Ques. Which of the following statements is/are TRUE

  • Execution-time address binding generates different logical and physical addresses. ✅
    • Execution-time address binding (done by the MMU) means:
    • Logical address ≠ Physical address
    • Logical addresses are translated to physical addresses during execution
  • The user program can access the physical address. ❌
    • A user program cannot access physical addresses directly.
      It always deals with logical (virtual) addresses.
      The OS + MMU translates them to physical addresses.

Address Binding and Memory Management

| Address Binding Time | Logical = Physical? | Example |
| --- | --- | --- |
| Compile-time | ✅ Yes (if fixed) | No relocation allowed |
| Load-time | ❌ No | Logical → Physical during load |
| Execution-time | ❌ No | Done using the MMU (hardware) |

Only OS or kernel mode code can directly access physical memory.
User programs work with logical/virtual addresses only.

Ques. The CPU fetches instructions from memory according to the value of ------ ?

Program Counter ✅

  • The Program Counter (PC) holds the address of the next instruction.
  • The CPU uses this address to fetch the instruction from memory.
  • The fetched instruction is then loaded into the Instruction Register (IR) for decoding and execution.
| Register | Role |
| --- | --- |
| Program Counter | Holds the address of the next instruction (used to fetch) |
| Instruction Register | Holds the current fetched instruction (after fetching) |
| Status Register | Stores flags (zero, carry, overflow, etc.) |
| Data Register | Temporarily holds data being processed |

Ques. Valid Metrics to Compare CPU Scheduling Algorithms:

  1. CPU Utilization – % of time CPU is busy
  2. Throughput – No. of processes completed per unit time
  3. Turnaround Time – Completion time − Arrival time
  4. Waiting Time – Turnaround time − Burst time
  5. Response Time – First response time − Arrival time
  6. Fairness (optional) – How equally processes are treated

What is the primary purpose of a cache memory?

- To store frequently accessed data and instructions to speed up the operation of the computer ✅

What is a segmentation fault? ⭐

An error caused by accessing memory that the CPU cannot physically address ✅

Which of the following scheduling algorithms can lead to starvation?

- Priority Scheduling ✅
- Shortest Job Next (SJN)

Which of the following is true about deadlocks?

- Deadlocks can occur when the four necessary conditions (mutual exclusion, hold and wait, no preemption, circular wait) are satisfied ✅

When would the SCAN disk scheduling algorithm exhibit starvation?

When the disk arm repeatedly oscillates back and forth