Threads MCQs — Operating Systems

  1. Which of the following is not typically shared by threads of the same process?
    A) Code segment
    B) Global variables and heap data
    C) File descriptors
    D) Thread stack
    Answer: D.
    Solution: Each thread has its own stack; other items are shared.
  2. A user-level thread library schedules threads in user space. Which is a main disadvantage?
    A) Fast creation
    B) Kernel unaware of threads → blocking syscalls block entire process
    C) Low overhead for context switch
    D) Portability
    Answer: B.
    Solution: Kernel schedules the process as a single entity; a blocking syscall blocks all threads.
  3. Which of the following is true about kernel-level threads?
    A) Scheduler sees only processes, not threads
    B) Each thread can be scheduled independently by the kernel
    C) Thread operations are always cheaper than user-level threads
    D) Kernel threads cannot share memory
    Answer: B.
    Solution: Kernel knows about threads and schedules them individually.
  4. Which model maps many user threads to a single kernel thread?
    A) One-to-one
    B) Many-to-one
    C) Many-to-many
    D) None-to-one
    Answer: B.
    Solution: Many user threads are multiplexed onto a single kernel thread.
  5. Which mapping model allows true parallelism on multiprocessors and good concurrency?
    A) Many-to-one
    B) One-to-one
    C) Two-to-one
    D) Virtual-to-physical
    Answer: B.
    Solution: One-to-one maps every user thread to a kernel thread → parallel scheduling.
  6. In a many-to-many threading model, what is the role of the scheduler?
    A) Kernel scheduler schedules only processes
    B) Both user-level and kernel-level scheduling can be used
    C) Only user-level scheduling possible
    D) No scheduling occurs
    Answer: B.
    Solution: Many-to-many supports multiplexing with user and kernel schedulers cooperating.
  7. Which of the following is not an advantage of threads?
    A) Faster context switching vs processes
    B) Easier sharing of data
    C) Isolation like separate address spaces
    D) Lower resource consumption
    Answer: C.
    Solution: Threads share an address space; they do not provide isolation.
  8. What is a lightweight process usually referring to?
    A) Thread
    B) Kernel process
    C) Zombie process
    D) Daemon
    Answer: A.
    Solution: Lightweight process = thread; less overhead than full process.
  9. Which of the following thread operations is typically the cheapest?
    A) fork()
    B) Creating a new thread (pthread_create)
    C) exec()
    D) spawn process
    Answer: B.
    Solution: Creating a thread is usually much cheaper than creating a process with fork().
  10. In POSIX threads, which function sets a thread to be joinable?
    A) pthread_create default is joinable
    B) pthread_detach()
    C) pthread_joinable()
    D) pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE)
    Answer: D.
    Solution: Set the detach state on a thread attributes object with pthread_attr_setdetachstate before pthread_create (threads are created joinable by default).
  11. If a thread is detached, then:
    A) Another thread must call pthread_join()
    B) It cannot be joined; resources are freed when it exits
    C) It blocks parent forever
    D) It converts to a process
    Answer: B.
    Solution: Detached threads free their resources on exit; join is not allowed.
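    Example: a minimal sketch of creating a detached thread through a thread-attributes object; the helper name spawn_detached is illustrative.
      #include <pthread.h>

      /* Create a detached thread: it cannot be joined; its resources are
         reclaimed automatically when it exits. */
      int spawn_detached(void *(*fn)(void *), void *arg) {
          pthread_attr_t attr;
          pthread_t tid;
          pthread_attr_init(&attr);
          pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
          int rc = pthread_create(&tid, &attr, fn, arg);
          pthread_attr_destroy(&attr);
          return rc;                       /* 0 on success, error number otherwise */
      }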
  12. What does pthread_join(thread, &retval) do?
    A) Kills thread
    B) Waits for thread to terminate and fetches return value
    C) Makes thread detached
    D) Creates a copy of thread
    Answer: B.
    Solution: Join waits for termination and optionally retrieves return value.
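    Example: a minimal create-and-join sketch; the worker function and its doubled return value are illustrative.
      #include <pthread.h>
      #include <stdio.h>

      static void *worker(void *arg) {
          long n = (long)arg;
          return (void *)(n * 2);          /* value later fetched by pthread_join */
      }

      int main(void) {
          pthread_t tid;
          void *retval;
          pthread_create(&tid, NULL, worker, (void *)21);  /* joinable by default */
          pthread_join(tid, &retval);      /* wait for termination, fetch result */
          printf("worker returned %ld\n", (long)retval);
          return 0;
      }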
  13. Which of the following is not async-signal-safe when used in signal handlers?
    A) write()
    B) pthread_mutex_lock()
    C) _exit()
    D) sig_atomic_t assignments
    Answer: B.
    Solution: pthread_mutex_lock is not async-signal-safe; avoid in signal handlers.
  14. When a multi-threaded process calls execve(), what happens to existing threads?
    A) They continue running in new program
    B) All other threads are terminated; only the calling thread remains, and it is replaced by the new program image
    C) They convert to processes
    D) They become zombies
    Answer: B.
    Solution: exec replaces the process image; all other threads are terminated, and the surviving (calling) thread runs the new program.
  15. Which of the following is used to synchronize threads at a barrier?
    A) Semaphores or pthread_barrier_wait
    B) pthread_mutex_unlock
    C) pthread_cond_broadcast only
    D) pthread_yield
    Answer: A.
    Solution: Barriers provide rendezvous; pthread_barrier_wait or semaphores can implement it.
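    Example: a sketch of a two-phase rendezvous using pthread_barrier_wait; the thread count and printed messages are illustrative.
      #include <pthread.h>
      #include <stdio.h>

      #define NTHREADS 4
      static pthread_barrier_t barrier;    /* initialized for NTHREADS parties */

      static void *phase_worker(void *arg) {
          long id = (long)arg;
          printf("thread %ld: phase 1 done\n", id);
          pthread_barrier_wait(&barrier);  /* rendezvous: nobody passes until all arrive */
          printf("thread %ld: phase 2 starts\n", id);
          return NULL;
      }

      int main(void) {
          pthread_t t[NTHREADS];
          pthread_barrier_init(&barrier, NULL, NTHREADS);
          for (long i = 0; i < NTHREADS; i++)
              pthread_create(&t[i], NULL, phase_worker, (void *)i);
          for (int i = 0; i < NTHREADS; i++)
              pthread_join(t[i], NULL);
          pthread_barrier_destroy(&barrier);
          return 0;
      }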
  16. In producer-consumer with buffer size N, which condition ensures safety?
    A) Use only mutex
    B) Use only condition variables or semaphores to coordinate full/empty counts plus mutex
    C) Busy waiting always best
    D) No synchronization required
    Answer: B.
    Solution: Need mutual exclusion and counters (empty/full) via semaphores or cond vars.
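    Example: a bounded-buffer sketch using counting semaphores plus a mutex; the buffer size N = 8 is an arbitrary choice and error handling is omitted.
      #include <pthread.h>
      #include <semaphore.h>

      #define N 8
      static int buf[N];
      static int in = 0, out = 0;
      static sem_t empty, full;            /* count free and used slots */
      static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

      void produce(int item) {
          sem_wait(&empty);                /* block if the buffer is full */
          pthread_mutex_lock(&m);
          buf[in] = item; in = (in + 1) % N;
          pthread_mutex_unlock(&m);
          sem_post(&full);                 /* one more item available */
      }

      int consume(void) {
          int item;
          sem_wait(&full);                 /* block if the buffer is empty */
          pthread_mutex_lock(&m);
          item = buf[out]; out = (out + 1) % N;
          pthread_mutex_unlock(&m);
          sem_post(&empty);                /* one more free slot */
          return item;
      }

      int main(void) {
          sem_init(&empty, 0, N);          /* N free slots initially */
          sem_init(&full, 0, 0);           /* no items yet */
          produce(42);
          return consume() == 42 ? 0 : 1;
      }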
  17. A race condition arises when:
    A) Programs produce identical outputs
    B) Correctness depends on relative timing of threads
    C) Threads never share data
    D) Threads are ordered strictly
    Answer: B.
    Solution: Race occurs when interleavings affect correctness.
  18. Which primitive ensures mutual exclusion without busy waiting?
    A) Spinlock
    B) Mutex (blocking)
    C) Test-and-set in user space only
    D) Volatile variable
    Answer: B.
    Solution: Mutexes block the thread (sleep) rather than spin, avoiding busy waiting.
  19. A spinlock is preferable to a mutex when:
    A) Lock hold time is long
    B) Threads run on different machines
    C) Lock hold time is very short and context switch overhead high
    D) You need cross-process synchronization
    Answer: C.
    Solution: Spinning is OK for short waits; context switch cost would dominate otherwise.
  20. Which of the following locking methods may lead to priority inversion?
    A) Mutex without priority inheritance
    B) Spinlock with preemption disabled
    C) Read-write lock used correctly
    D) Lock-free algorithms
    Answer: A.
    Solution: Low-priority holder with high-priority waiter leads to inversion; priority inheritance solves it.
  21. What does a condition variable provide?
    A) Mutual exclusion directly
    B) A way to block until some condition becomes true, used with a mutex
    C) Atomic increment/decrement
    D) Thread creation
    Answer: B.
    Solution: Condition variables are used to wait for state changes while holding a mutex.
  22. Which is true for pthread_cond_wait(&cond, &mutex)?
    A) It assumes mutex unlocked before call
    B) It atomically releases mutex and blocks the thread, re-acquires it when awakened
    C) It never re-acquires mutex
    D) It destroys mutex
    Answer: B.
    Solution: pthread_cond_wait atomically unlocks mutex and blocks, re-locking before returning.
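    Example: the canonical wait pattern as a sketch; the ready flag and helper names are illustrative. The while loop also guards against spurious wakeups (see question 53).
      #include <pthread.h>
      #include <stdbool.h>

      static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
      static bool ready = false;           /* the predicate the waiter cares about */

      void wait_until_ready(void) {
          pthread_mutex_lock(&m);
          while (!ready)                   /* while, not if: recheck after every wakeup */
              pthread_cond_wait(&cv, &m);  /* atomically unlocks m, sleeps, relocks on wakeup */
          /* predicate is true and the mutex is held here */
          pthread_mutex_unlock(&m);
      }

      void set_ready(void) {
          pthread_mutex_lock(&m);
          ready = true;                    /* change shared state under the mutex */
          pthread_cond_signal(&cv);        /* wake one waiter */
          pthread_mutex_unlock(&m);
      }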
  23. Reader-writer locks prefer readers. What’s the main risk?
    A) Starvation of readers
    B) Starvation of writers
    C) Deadlock always occurs
    D) No concurrency benefit
    Answer: B.
    Solution: If readers keep acquiring the lock, writers may starve.
  24. A lock-free data structure means:
    A) No synchronization at all
    B) Uses atomic primitives (CAS, atomic instructions) to ensure progress without locks
    C) Always single threaded
    D) Uses mutexes in disguise
    Answer: B.
    Solution: Lock-free uses atomic operations to avoid locks while ensuring progress.
  25. Which atomic primitive can be used for lock-free stacks?
    A) Test-and-set only
    B) Compare-and-swap (CAS)
    C) sleep()
    D) pthread_join
    Answer: B.
    Solution: CAS enables atomic conditional update needed for lock-free push/pop.
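    Example: a lock-free push sketch using C11 compare-and-swap; pop additionally needs ABA and memory-reclamation handling, which is omitted here.
      #include <stdatomic.h>
      #include <stdlib.h>

      struct node { int value; struct node *next; };
      static _Atomic(struct node *) top;   /* shared stack head, initially NULL */

      void push(int value) {
          struct node *n = malloc(sizeof *n);
          n->value = value;
          struct node *old = atomic_load(&top);
          do {
              n->next = old;               /* link to the head we last observed */
          } while (!atomic_compare_exchange_weak(&top, &old, n));  /* retry if head moved */
      }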
  26. What is a wait-free algorithm?
    A) Every operation completes in a bounded number of steps regardless of other threads
    B) Lock-based but fair
    C) Uses busy-wait forever
    D) Equivalent to spinlocks
    Answer: A.
    Solution: Wait-free guarantees per-thread progress in bounded steps.
  27. Which hazard is caused by non-atomic read-modify-write on shared counters?
    A) Deadlock
    B) Lost update (race)
    C) Livelock
    D) Starvation
    Answer: B.
    Solution: Without atomic ops, increments can overwrite each other — lost updates.
  28. Which approach avoids both deadlocks and starvation in dining philosophers?
    A) All pick left fork first
    B) Resource hierarchy (order forks by number) or limit concurrency
    C) Random retries only
    D) Busy waiting forever
    Answer: B.
    Solution: Enforce ordered resource acquisition or limit simultaneous eaters.
  29. Thread cancellation in POSIX: asynchronous cancellation is dangerous because:
    A) It never works
    B) It can leave resources locked or inconsistent unless cleanup handlers used
    C) It always kills other processes
    D) It converts thread to process
    Answer: B.
    Solution: Async cancellation can interrupt at arbitrary points; must use cancellation points or handlers.
  30. Which of the following is a cancellation point in POSIX?
    A) pthread_mutex_lock
    B) write (blocking)
    C) pthread_create
    D) pthread_detach
    Answer: B.
    Solution: Certain blocking calls, including write(), are cancellation points; pthread_mutex_lock is explicitly not one.
  31. Signal delivery semantics in multi-threaded process: which thread receives an asynchronous signal sent to process?
    A) Kernel chooses any thread not blocking that signal
    B) First thread only
    C) All threads always get it
    D) Signals are not used in threads
    Answer: A.
    Solution: Process-directed signals are delivered to one eligible thread.
  32. Thread-directed signals (pthread_kill) are delivered to:
    A) A specific thread
    B) All threads
    C) The process leader only
    D) Random process
    Answer: A.
    Solution: pthread_kill directs the signal to a particular thread.
  33. Which function sets the CPU affinity of a thread (Linux)?
    A) sched_setaffinity (thread-specific via tid)
    B) pthread_setaffinity_np
    C) setaffinity
    D) bind_cpu
    Answer: B (or A, using the thread's kernel TID).
    Solution: pthread_setaffinity_np (non-portable, glibc) or sched_setaffinity with the thread's kernel TID sets a thread's CPU affinity on Linux.
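    Example: a Linux/glibc-specific sketch that pins the calling thread to one CPU with pthread_setaffinity_np; the helper name is illustrative.
      #define _GNU_SOURCE                  /* exposes pthread_setaffinity_np and CPU_* macros */
      #include <pthread.h>
      #include <sched.h>

      int pin_self_to_cpu(int cpu) {
          cpu_set_t set;
          CPU_ZERO(&set);
          CPU_SET(cpu, &set);              /* allow only the given CPU */
          return pthread_setaffinity_np(pthread_self(), sizeof set, &set);
      }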
  34. Which problem is solved by thread-local storage (TLS)?
    A) Sharing all variables
    B) Providing per-thread copies of global-like variables to avoid synchronization
    C) Eliminating stacks
    D) Replacing mutexes
    Answer: B.
    Solution: TLS gives each thread private data without locking.
  35. Which of the following is true about user-level threads (ULT) and kernel-level threads (KLT)?
    A) ULT are scheduled by kernel
    B) Switching between ULT does not invoke kernel (fast)
    C) ULT can use multiple CPUs
    D) KLT cannot be preempted
    Answer: B.
    Solution: ULTs are scheduled in user space; the kernel is unaware of them, so switching is fast, but a pure user-level implementation cannot run threads on multiple CPUs in parallel.
  36. Which mechanism allows threads within same process to make blocking syscalls without blocking others?
    A) Use kernel threads (one-to-one)
    B) Use many-to-one model
    C) Use single-threaded process model
    D) Use signal handlers only
    Answer: A.
    Solution: Kernel threads ensure blocking call blocks only that kernel thread.
  37. In a mixed model, what happens when a user-level thread blocks?
    A) Kernel blocks entire process always
    B) Mapper maps another user thread to kernel thread, or uses scheduler activations to handle blocking
    C) All threads are terminated
    D) Nothing
    Answer: B.
    Solution: Many-to-many or scheduler activations let another thread run when one blocks.
  38. Which of the following ensures mutual exclusion across multiple processes (not threads)?
    A) pthread_mutex_t with PTHREAD_PROCESS_SHARED attribute or POSIX semaphores (sem_open)
    B) pthread mutex default only works within process
    C) volatile keyword
    D) Thread-local storage
    Answer: A.
    Solution: Use inter-process synchronization primitives like process-shared mutexes or named semaphores.
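    Example: a sketch that places a PTHREAD_PROCESS_SHARED mutex in anonymous shared memory so a parent and a fork()ed child can both use it; error handling is omitted.
      #include <pthread.h>
      #include <sys/mman.h>

      /* Return a process-shared mutex living in shared memory (inherited across fork). */
      pthread_mutex_t *make_shared_mutex(void) {
          pthread_mutex_t *m = mmap(NULL, sizeof *m, PROT_READ | PROT_WRITE,
                                    MAP_SHARED | MAP_ANONYMOUS, -1, 0);
          pthread_mutexattr_t attr;
          pthread_mutexattr_init(&attr);
          pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
          pthread_mutex_init(m, &attr);
          pthread_mutexattr_destroy(&attr);
          return m;                        /* usable by parent and child after fork() */
      }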
  39. Which of these is not a threading bug?
    A) Data race
    B) Deadlock
    C) Live-lock
    D) Memory leak (possible even in a single-threaded program)
    Answer: D.
    Solution: Memory leaks can occur in any program; they are not a threading-specific bug.
  40. When two threads deadlock, which technique can resolve it without killing threads?
    A) Resource preemption (rollback), ordering resources, or detect & recover
    B) Ignore it
    C) Busy wait more
    D) Increase memory
    Answer: A.
    Solution: Deadlock can be prevented by ordering, or detected and recovered by preemption.
  41. A reentrant function is:
    A) One that uses global state only
    B) Safe to call concurrently or from interrupts because it does not rely on shared writable state
    C) Always uses locks internally
    D) None of the above
    Answer: B.
    Solution: Reentrant avoids static/global modifiable state or uses local storage.
  42. Which of the following functions is reentrant?
    A) strtok()
    B) strerror()
    C) strtol() (generally reentrant)
    D) getpwent()
    Answer: C.
    Solution: strtok, strerror, and getpwent rely on static internal state and are not reentrant; strtol is generally reentrant.
  43. Thread stacks must be:
    A) Shared among threads
    B) Separate for each thread
    C) Stored in kernel only
    D) Not required
    Answer: B.
    Solution: Each thread needs its own stack for local variables and return addresses.
  44. Which parameter affects default stack size for pthreads?
    A) System default or pthread_attr_setstacksize
    B) Only CPU speed
    C) pthread_yield
    D) pthread_detach only
    Answer: A.
    Solution: Use pthread attributes to set stack size; system also has default.
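    Example: a sketch creating a thread with an explicit 1 MiB stack via pthread_attr_setstacksize; the size is an arbitrary illustration and PTHREAD_STACK_MIN is the portable lower bound.
      #include <pthread.h>
      #include <limits.h>

      int create_with_stack(pthread_t *tid, void *(*fn)(void *), void *arg) {
          pthread_attr_t attr;
          size_t size = 1024 * 1024;       /* 1 MiB, chosen for illustration */
          if (size < PTHREAD_STACK_MIN)
              size = PTHREAD_STACK_MIN;
          pthread_attr_init(&attr);
          pthread_attr_setstacksize(&attr, size);
          int rc = pthread_create(tid, &attr, fn, arg);
          pthread_attr_destroy(&attr);
          return rc;
      }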
  45. Which thread state transition causes highest overhead?
    A) Ready → Running
    B) Running → Ready
    C) Thread creation (allocating stack, TCB)
    D) Voluntary yield
    Answer: C.
    Solution: Creating thread allocates resources; costlier than context switches.
  46. What is cooperative multithreading?
    A) Kernel preemption always
    B) Threads yield control voluntarily at well-known points
    C) Using interrupts only
    D) Threads scheduled randomly
    Answer: B.
    Solution: Cooperative scheduling expects threads to yield for fairness.
  47. Which of the following is an example of preemptive multithreading?
    A) Java threads on JVM that the OS schedules preemptively
    B) coroutines in single-threaded event loop
    C) Cooperative multitasking in old OS
    D) Manual yield only
    Answer: A.
    Solution: JVM uses native threads scheduled preemptively by OS.
  48. Which is true about green threads?
    A) Implemented by OS kernel
    B) User-level threads implemented in runtime (e.g., early Java green threads)
    C) Always use multiple cores
    D) Are hardware threads
    Answer: B.
    Solution: Green threads are user-space threads scheduled by runtime.
  49. Why are cancelability cleanup handlers important?
    A) To leak resources
    B) To free or release resources (mutexes, memory) when thread cancelled
    C) To speed up cancellation
    D) Not needed
    Answer: B.
    Solution: Cleanup handlers ensure consistent state on cancellation.
  50. Which operation causes thread to yield CPU voluntarily?
    A) pthread_yield() / sched_yield()
    B) pthread_exit only
    C) pthread_create
    D) pthread_join only
    Answer: A.
    Solution: Yield allows scheduler to run other threads.
  51. Which of these is typically implemented as per-thread data?
    A) errno in C library (thread-local)
    B) Global static counters shared across threads
    C) Per-thread copies of file descriptors
    D) Per-thread kernel memory mappings
    Answer: A.
    Solution: errno is thread-local to avoid races.
  52. Which synchronization primitive allows multiple readers but only one writer?
    A) Mutex
    B) Semaphore with count 1
    C) Read-write lock (pthread_rwlock)
    D) Spinlock
    Answer: C.
    Solution: RW-locks allow concurrency for readers, exclusive for writers.
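    Example: a read-write lock sketch; readers proceed concurrently while the writer is exclusive. The shared_value variable and helper names are illustrative.
      #include <pthread.h>

      static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
      static int shared_value;

      int read_value(void) {
          pthread_rwlock_rdlock(&rw);      /* many readers may hold this concurrently */
          int v = shared_value;
          pthread_rwlock_unlock(&rw);
          return v;
      }

      void write_value(int v) {
          pthread_rwlock_wrlock(&rw);      /* exclusive: no readers or other writers */
          shared_value = v;
          pthread_rwlock_unlock(&rw);
      }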
  53. Which is true about condition variables and spurious wakeups?
    A) They never occur
    B) Must always use while loop to recheck condition after wait
    C) Use if() then wait; no need to recheck
    D) Condition variables enforce atomic condition checks
    Answer: B.
    Solution: Spurious wakeups possible — use while loops.
  54. Thread starvation prevention technique:
    A) Disable scheduling
    B) Use aging to boost priority gradually
    C) Use smaller stacks
    D) Use more locking
    Answer: B.
    Solution: Aging increases priority of waiting threads to prevent starvation.
  55. Which is true about thread preemption and critical sections?
    A) Preemption requires entering atomic hardware lock-free section only
    B) Must disable preemption or use locks to protect critical section in kernel context
    C) No need to protect critical section
    D) Critical sections cannot exist in threads
    Answer: B.
    Solution: Kernel disables preemption or uses locks to protect critical regions.
  56. Which API is most portable for threads across POSIX systems?
    A) pthreads
    B) Windows threads
    C) Linux clone
    D) fork threads
    Answer: A.
    Solution: POSIX pthreads are standardized across UNIX-like systems.
  57. Which of the following is not a valid way to create a thread in C?
    A) pthread_create
    B) std::thread in C++
    C) fork (produces process, not thread)
    D) clone with CLONE_VM/CLONE_THREAD
    Answer: C.
    Solution: fork creates a new process, not a thread.
  58. Which is true about asynchronous cancellation and mutexes?
    A) If a thread is cancelled while holding a mutex, other threads may deadlock unless handled
    B) Mutexes are automatically released on cancel
    C) Asynchronous cancellation does not affect mutexes
    D) pthread_mutex_unlock must be called by other thread
    Answer: A.
    Solution: Cancellation must run cleanup handlers to release mutexes.
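    Example: a sketch using pthread_cleanup_push/pop so a held mutex is released even if the thread is cancelled inside the protected region; the worker name is illustrative.
      #include <pthread.h>

      static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

      static void unlock_on_cancel(void *arg) {
          pthread_mutex_unlock(arg);       /* runs if the thread is cancelled below */
      }

      void *guarded_worker(void *arg) {
          (void)arg;
          pthread_mutex_lock(&m);
          pthread_cleanup_push(unlock_on_cancel, &m);
          /* ... work containing cancellation points (e.g., blocking I/O) ... */
          pthread_cleanup_pop(1);          /* 1 = run the handler on the normal path too */
          return NULL;
      }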
  59. Thread pools are used primarily to:
    A) Reduce thread creation overhead by reusing a set of worker threads
    B) Increase stack sizes per request
    C) Replace locks
    D) Run only one job at a time
    Answer: A.
    Solution: Thread pools amortize creation/destruction costs.
  60. Which scheduling guarantee is important for real-time threads?
    A) Best-effort only
    B) Predictable bounded latency (deadline)
    C) FIFO always best
    D) No preemption allowed
    Answer: B.
    Solution: Real-time systems need deterministic response times and deadlines.
  61. Which function returns thread ID of calling thread in POSIX?
    A) pthread_self()
    B) getpid()
    C) gettid() (Linux-specific)
    D) thread_id()
    Answer: A.
    Solution: pthread_self returns pthread_t identifier.
  62. Which of these is not a thread scheduling policy in POSIX?
    A) SCHED_OTHER
    B) SCHED_FIFO
    C) SCHED_RR
    D) SCHED_RANDOM
    Answer: D.
    Solution: POSIX supports OTHER, FIFO, RR; RANDOM doesn’t exist.
  63. Priority inheritance protocol prevents which issue?
    A) Race condition
    B) Priority inversion
    C) Deadlock always
    D) Lost wakeups
    Answer: B.
    Solution: Priority inheritance temporarily raises lock holder priority to avoid inversion.
  64. What is the purpose of pthread_key_create and pthread_setspecific?
    A) Create thread-local storage keys and set thread-specific data
    B) Allocate kernel memory
    C) Create global variables
    D) Create semaphores
    Answer: A.
    Solution: They implement TLS via key/value per-thread.
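    Example: a sketch of per-thread scratch storage built with pthread_key_create and pthread_setspecific; the 256-byte buffer size and helper names are illustrative.
      #include <pthread.h>
      #include <stdlib.h>

      static pthread_key_t buf_key;
      static pthread_once_t key_once = PTHREAD_ONCE_INIT;

      static void make_key(void) {
          pthread_key_create(&buf_key, free);   /* 'free' runs per thread at thread exit */
      }

      /* Return a per-thread scratch buffer, created lazily on first use. */
      char *thread_scratch(void) {
          pthread_once(&key_once, make_key);
          char *buf = pthread_getspecific(buf_key);
          if (buf == NULL) {
              buf = calloc(1, 256);
              pthread_setspecific(buf_key, buf);
          }
          return buf;
      }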
  65. Which is a typical cause of livelock?
    A) Two threads repeatedly yielding to each other without making progress
    B) Deadlock with resource hold
    C) Memory leak
    D) Thread crash
    Answer: A.
    Solution: Livelock — active but no progress due to mutual yielding.
  66. In Java, synchronized methods provide what?
    A) Mutual exclusion on the instance or class monitor
    B) Atomic file I/O
    C) Thread creation
    D) Garbage collection
    Answer: A.
    Solution: synchronized uses intrinsic lock (monitor) for mutual exclusion.
  67. Which is false about recursive mutex?
    A) Allows same thread to lock multiple times
    B) Must be unlocked same number of times
    C) Prevents deadlock from re-entrant calls by same thread
    D) Always recommended for every kind of lock
    Answer: D.
    Solution: Recursive mutexes are useful but overuse can hide design issues.
  68. Which pattern reduces contention on a hot lock?
    A) Lock striping (partitioning data) or use of finer-grained locks
    B) Larger single global lock always
    C) More spinning only
    D) Single-threaded approach
    Answer: A.
    Solution: Partition data to multiple locks to reduce contention.
  69. In thread safety, atomicity means:
    A) Whole program runs atomically
    B) Operation appears indivisible to other threads
    C) No threads exist
    D) Use of mutex always
    Answer: B.
    Solution: Atomic operations complete without observable interleaving.
  70. Which library provides lock-free atomic operations in C11?
    A) stdatomic.h
    B) string.h
    C) pthread.h only
    D) unistd.h
    Answer: A.
    Solution: C11 atomic types and functions in stdatomic.h.
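    Example: a C11 stdatomic.h sketch of a shared counter whose increments cannot be lost (contrast with question 27); names are illustrative.
      #include <stdatomic.h>
      #include <stdio.h>

      static atomic_int counter;           /* statically zero-initialized */

      void hit(void) {
          atomic_fetch_add(&counter, 1);   /* atomic read-modify-write: no lost updates */
      }

      int main(void) {
          hit(); hit();
          printf("%d\n", atomic_load(&counter));   /* prints 2 */
          return 0;
      }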
  71. Which of the following is true about pthread_mutex_trylock?
    A) Blocks until lock acquired
    B) Returns immediately with error if lock not available
    C) Always returns EINVAL
    D) Kills other thread
    Answer: B.
    Solution: trylock is non-blocking and returns EBUSY if locked.
  72. Which option is best to implement a timeout while waiting on a condition?
    A) pthread_cond_wait only
    B) pthread_cond_timedwait with absolute timespec
    C) Use busy-wait and check time yourself
    D) Sleep only
    Answer: B.
    Solution: pthread_cond_timedwait supports timeouts.
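    Example: a sketch of waiting on a condition with an absolute deadline via pthread_cond_timedwait; the done flag and helper name are illustrative.
      #include <pthread.h>
      #include <time.h>
      #include <errno.h>
      #include <stdbool.h>

      static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
      static bool done = false;

      /* Wait up to 'seconds' for 'done'; returns true if it became true, false on timeout. */
      bool wait_done_with_timeout(int seconds) {
          struct timespec deadline;
          clock_gettime(CLOCK_REALTIME, &deadline);   /* timedwait takes an absolute time */
          deadline.tv_sec += seconds;

          pthread_mutex_lock(&m);
          int rc = 0;
          while (!done && rc != ETIMEDOUT)
              rc = pthread_cond_timedwait(&cv, &m, &deadline);
          bool ok = done;
          pthread_mutex_unlock(&m);
          return ok;
      }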
  73. Which of the following is not a valid way to initialize or use a pthread_mutex_t?
    A) pthread_mutex_init with an attributes object
    B) Static initialization with PTHREAD_MUTEX_INITIALIZER
    C) Locking a mutex that was never initialized
    D) pthread_mutex_init with NULL (default) attributes
    Answer: C.
    Solution: A mutex must be initialized with pthread_mutex_init or PTHREAD_MUTEX_INITIALIZER before use; locking an uninitialized mutex is undefined behavior.
  74. If a thread calls pthread_exit(), what happens to other threads?
    A) All terminate immediately
    B) Other threads continue; process ends only when all non-detached threads exit or process calls exit
    C) Process terminates at once always
    D) Threads are paused
    Answer: B.
    Solution: pthread_exit terminates calling thread; others keep running.
  75. In hybrid threading models (scheduler activations), what is a key benefit?
    A) Kernel informs user-level scheduler about blocking events to remap threads
    B) No need for synchronization
    C) Threads are immortal
    D) No context switching ever
    Answer: A.
    Solution: Scheduler activations bridge kernel and user schedulers.
  76. Which is true about thread priorities on a multi-core system?
    A) Higher priority thread always runs irrespective of core load
    B) Scheduling policies and affinity decide which threads run on which cores; priorities influence scheduling decisions per core
    C) Priorities ignored completely
    D) Priorities determine memory allocation only
    Answer: B.
    Solution: Priorities affect scheduling but OS policies and affinity matter.
  77. Which debugging tool is helpful for finding data races?
    A) Helgrind/ThreadSanitizer
    B) Valgrind only for memory leaks
    C) gdb only for single-threaded debugging
    D) top
    Answer: A.
    Solution: TSAN and Helgrind can detect races and threading bugs.
  78. Which of the following is true for atomic compare-and-swap (CAS)?
    A) It always succeeds on first try
    B) It compares value and swaps if equal; may require retries in loop for lock-free algorithms
    C) It creates a lock automatically
    D) It is not atomic
    Answer: B.
    Solution: CAS is atomic; lock-free algorithms often loop on CAS until success.
  79. Thread-safe lazy initialization often uses:
    A) Double-checked locking pattern with memory barriers or std::call_once
    B) Simple if-null then create without locking
    C) No synchronization needed
    D) Busy-wait only
    Answer: A.
    Solution: Double-checked locking must use proper memory fences; std::call_once recommended.
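    Example: in C with pthreads, the simplest safe alternative to hand-rolled double-checked locking is pthread_once, sketched below; the 1 KiB allocation is illustrative.
      #include <pthread.h>
      #include <stdlib.h>

      static pthread_once_t once = PTHREAD_ONCE_INIT;
      static void *singleton;

      static void init_singleton(void) {
          singleton = malloc(1024);        /* runs exactly once, even with many callers */
      }

      void *get_singleton(void) {
          pthread_once(&once, init_singleton);   /* safe lazy init, no DIY memory fences */
          return singleton;
      }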
  80. Which is true about pthread_rwlock_tryrdlock?
    A) Blocks until writer unlocks
    B) Attempts to acquire read lock without blocking, returns error if cannot
    C) Always fails
    D) Converts reader to writer atomically
    Answer: B.
    Solution: try functions are non-blocking.
  81. Which threading model is most scalable on many-core systems?
    A) Many-to-one
    B) One-to-one (kernel threads) or well-implemented many-to-many with true kernel support
    C) Green threads only
    D) Single-threaded event loop always best
    Answer: B.
    Solution: Kernel-level mapping per thread or scalable many-to-many is better for multicore.
  82. Which API provides memory barriers and atomic fences in C++11?
    A) atomic_thread_fence
    B) pthread_barrier_wait
    C) pthread_mutex_lock
    D) std::mutex only
    Answer: A.
    Solution: atomic_thread_fence provides memory ordering guarantees.
  83. Which of the following is recommended to avoid long blocking in critical sections?
    A) Keep critical sections short and do heavy work outside locked region
    B) Acquire locks globally for entire program
    C) Use one global lock for everything
    D) No synchronization at all
    Answer: A.
    Solution: Minimize time spent holding locks to reduce contention.
  84. Which of following ensures fairness among threads waiting for a lock?
    A) FIFO queueing within lock implementation or ticket locks
    B) Spin forever
    C) Random unlocking
    D) No locking
    Answer: A.
    Solution: Ticket or queue locks provide fairness (first-come-first-served).
  85. What is a ticket lock?
    A) A fair spinlock where each waiter gets a ticket number and serves FIFO
    B) A mutex that uses tickets to buy locks
    C) A lock for IO only
    D) A recursive lock
    Answer: A.
    Solution: Ticket lock enforces FIFO order and avoids starvation.
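    Example: a minimal ticket spinlock sketch using C11 atomics; initialize with struct ticket_lock l = {0}. A real implementation would add a pause/backoff hint while spinning.
      #include <stdatomic.h>

      struct ticket_lock {
          atomic_uint next_ticket;         /* ticket handed to the next arriving thread */
          atomic_uint now_serving;         /* ticket currently allowed into the critical section */
      };

      void ticket_lock(struct ticket_lock *l) {
          unsigned my = atomic_fetch_add(&l->next_ticket, 1);   /* take a ticket */
          while (atomic_load(&l->now_serving) != my)            /* spin until called: FIFO order */
              ;
      }

      void ticket_unlock(struct ticket_lock *l) {
          atomic_fetch_add(&l->now_serving, 1);   /* admit the next ticket holder */
      }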
  86. Which of the following can prevent deadlocks by resource ordering?
    A) Always acquire resources in increasing numeric order
    B) Acquire randomly
    C) Release then reacquire always
    D) Not using locks
    Answer: A.
    Solution: Resource ordering avoids circular wait condition.
  87. Which thread synchronization primitive allows counting multiple resources simultaneously?
    A) Binary semaphore (mutex)
    B) Counting semaphore (sem_init with value >1)
    C) Condition variable only
    D) Spinlock only
    Answer: B.
    Solution: Counting semaphores track available resources.
  88. Which of the following is true about atomic fetch_add in C++?
    A) Not atomic
    B) atomically adds and returns previous value; useful for lock-free counters
    C) Always locks a mutex internally
    D) Only for floats
    Answer: B.
    Solution: atomic fetch_add provides atomic increment semantics.
  89. Which technique helps in reducing false sharing?
    A) Padding shared structures so hot fields reside on separate cache lines
    B) Reducing thread count only
    C) Using only global variables
    D) Sharing pointer to same cache line always
    Answer: A.
    Solution: False sharing arises when threads modify adjacent fields in same cache line; padding avoids it.
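    Example: a sketch that aligns per-thread counters so each lands on its own cache line; 64 bytes is a typical line size and the array of 8 workers is illustrative.
      #include <stdatomic.h>

      /* _Alignas(64) gives every element its own cache line, so updates by
         different threads do not invalidate each other's lines. */
      struct padded_counter {
          _Alignas(64) atomic_long value;
      };

      static struct padded_counter counters[8];   /* one slot per worker thread */

      void bump_counter(int thread_idx) {
          atomic_fetch_add(&counters[thread_idx].value, 1);
      }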
  90. Which of these frameworks uses cooperative multitasking with coroutines?
    A) Go runtime uses goroutines (M:N model but goroutines are scheduled cooperatively in userland with preemption support)
    B) POSIX pthreads always cooperative
    C) C stdlib threads only
    D) None
    Answer: A.
    Solution: Goroutines are lightweight user-space threads with scheduler in runtime; historically cooperative but recent versions include preemption.
  91. What does pthread_once do?
    A) Ensures a one-time initialization routine runs once across threads
    B) Creates one thread only
    C) Locks a mutex forever
    D) Schedules once only function to run many times
    Answer: A.
    Solution: pthread_once guarantees function execution exactly once.
  92. Which of the following can be used to implement read-copy-update (RCU) patterns?
    A) Epoch-based reclamation plus atomic pointers and grace periods
    B) Mutex only
    C) Condition variables only
    D) Spin forever
    Answer: A.
    Solution: RCU lets readers proceed without locking while writers publish new versions via atomic pointer updates; old versions are reclaimed only after a grace period in which all pre-existing readers have finished.
  93. When migrating threads between cores to balance load, what must be considered?
    A) Cache affinity (warmth) and synchronization cost
    B) Only thread priority
    C) Nothing; always move threads frequently
    D) Only memory size
    Answer: A.
    Solution: Moving threads may cause cache misses and synchronization issues; consider affinity.
  94. Which of these is an example of cooperative cancellation?
    A) Thread checks a shared flag periodically and exits if set
    B) pthread_cancel with asynchronous cancel type only
    C) Sending SIGKILL to thread only
    D) Not possible
    Answer: A.
    Solution: Cooperative cancellation requires thread to poll and exit cleanly.
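    Example: a cooperative-cancellation sketch in which the worker polls an atomic stop flag; names are illustrative.
      #include <stdatomic.h>
      #include <stdbool.h>

      static atomic_bool stop_requested = false;  /* set by another thread to ask for exit */

      void *worker(void *arg) {
          (void)arg;
          while (!atomic_load(&stop_requested)) {
              /* ... do one unit of work ... */
          }
          /* release resources, flush state, then return cleanly */
          return NULL;
      }

      void request_stop(void) {
          atomic_store(&stop_requested, true);    /* worker notices at its next check */
      }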
  95. What is the difference between green threads and kernel threads regarding blocking syscalls?
    A) Green threads: blocking syscall blocks entire process; kernel threads: blocks only that kernel thread
    B) Both same behavior always
    C) Only green threads can make blocking syscalls
    D) Kernel threads cannot block
    Answer: A.
    Solution: Green threads are managed by runtime; kernel threads allow blocking per-thread.
  96. Which locking strategy is best when readers greatly outnumber writers?
    A) Reader-preference read-write lock with writer starvation prevention (or fair RW lock)
    B) Exclusive mutex for everything
    C) No locks, only optimistic concurrency
    D) Spinlock always
    Answer: A.
    Solution: Use RW lock to maximize concurrency; ensure writers not starved.
  97. In thread pool design, what is the main tradeoff when choosing pool size?
    A) Too small → queueing and latency; too large → context switching and resource exhaustion
    B) No tradeoff: make as large as possible
    C) Always 1 is best
    D) Pool size doesn’t matter
    Answer: A.
    Solution: Balance CPU bound vs IO bound tasks and system resources.
  98. Which of the following is NOT a valid reason to use per-thread caching?
    A) Reduce contention on shared caches
    B) Reduce synchronization overhead
    C) Increase false sharing always
    D) Speed up frequent small allocations
    Answer: C.
    Solution: Per-thread caches reduce contention and synchronization overhead and, when laid out properly, avoid false sharing rather than increase it.
  99. What is the typical effect of enabling affinity for threads pinned to cores?
    A) Reduces schedule overhead and improves cache locality but can hurt load balancing if rigid
    B) Always improves performance regardless
    C) Prevents threads from running
    D) None
    Answer: A.
    Solution: Affinity improves locality but reduces flexibility to balance load.
  100. Which practice is recommended for writing thread-safe libraries?
    A) Document thread-safety guarantees clearly, avoid hidden global state, use fine-grained locking or lock-free algorithms, offer thread-safe and non-thread-safe variants of APIs
    B) Never document anything
    C) Use global variables without locks
    D) Use random sleeps as synchronization
    Answer: A.
    Solution: Clear contracts, minimal shared state, and proper synchronization are best practices.