Divide & Conquer (Algorithm Design Techniques)
- Recurrence: (T(n)=2T(n/2)+n). By Master theorem, (T(n)=)
A) (O(n)) B) (O(n\log n)) C) (O(n^2)) D) (O(\log n))
Answer: B
Explanation: (a=2,b=2,f(n)=n). (n^{\log_b a}=n). Case 2 → (T(n)=\Theta(n\log n)). - Merge sort on array of size 4096 does worst-case comparisons ≈:
A) 4096 B) 4096·log₂(4096) ≈ 4096·12 = 49152 C) 2^12 D) 12
Answer: B
Explanation: Merge sort does Θ(n log n) comparisons; log₂4096=12. - Recurrence (T(n)=3T(n/3)+n) solves to:
A) (O(n\log n)) B) (O(n)) C) (O(n^{\log_3 3})) D) (O(n^2))
Answer: A
Explanation: (a=3,b=3\Rightarrow n^{\log_b a}=n). f(n)=n ⇒ case 2 → Θ(n log n). - QuickSort worst-case on sorted input (n elements) with naive pivot choice is:
A) (O(n\log n)) B) (O(n)) C) (O(n^2)) D) (O(\log n))
Answer: C
Explanation: Picking first or last as pivot yields degenerate partitions → quadratic. - Closest pair of points via divide & conquer runs in:
A) (O(n^2)) B) (O(n\log n)) C) (O(n\log^2 n)) D) (O(n))
Answer: B
Explanation: Sort by x once and merge-strip step is linear per level → O(n log n). - Recurrence (T(n)=4T(n/2)+n) gives order:
A) (O(n^2)) B) (O(n^{\log_2 4})=O(n^2)) C) Both A and B are the same D) (O(n\log n))
Answer: C
Explanation: a=4,b=2 → (n^{\log_2 4}=n^2); f(n)=n smaller → dominated by n^2. - Karatsuba multiplication for two n-digit numbers runs in:
A) (O(n^2)) B) (O(n^{\log_2 3})\approx O(n^{1.585})) C) (O(n\log n)) D) (O(\log n))
Answer: B
Explanation: Karatsuba reduces to 3 multiplications on half-size → exponent log₂3. - Strassen matrix multiply for n×n (n power of 2) runs in:
A) (O(n^{2.807})) B) (O(n^3)) C) (O(n^2)) D) (O(n\log n))
Answer: A
Explanation: Strassen does 7 submultiplications on n/2 → exponent log₂7 ≈ 2.807. - Counting inversions via divide & conquer (merge-based) runs in:
A) (O(n^2)) B) (O(n\log n)) C) (O(n)) D) (O(\log n))
Answer: B
Explanation: Merge step counts cross inversions in O(n); recurrence gives n log n. - Recurrence (T(n)=T(0.9n)+O(1)) solves to:
A) (O(\log n)) B) (O(n)) C) (O(1)) D) (O(n\log n))
Answer: A
Explanation: Geometric shrink → number of levels Θ(log n). - Binary search in a sorted array of size 1,000,000 worst comparisons ≈:
A) 20 B) 10 C) 2 D) 100
Answer: A
Explanation: log₂(1e6) ≈ 20 → O(log n). - Recurrence (T(n)=2T(n/4)+\sqrt{n}): compare n^{log_b a}=n^{log_4 2}=n^{1/2}. So complexity =
A) (O(\sqrt{n}\log n)) B) (O(\sqrt{n})) C) (O(n)) D) (O(n^{3/2}))
Answer: A
Explanation: f(n)=Θ(n^{1/2}) equals n^{log_b a} → case 2 → Θ(n^{1/2} log n). - Divide & conquer algorithm that splits into 5 subproblems of size n/3 with no extra cost: (T(n)=5T(n/3)) order =
A) (O(n^{\log_3 5}\approx n^{1.464})) B) (O(n\log n)) C) (O(n^2)) D) (O(n))
Answer: A
Explanation: Pure power law: exponent log₃5 ≈1.464. - Median of medians selection algorithm worst-case time:
A) (O(n\log n)) B) (O(n)) C) (O(n^2)) D) (O(\log n))
Answer: B
Explanation: Deterministic linear-time selection uses 5-group partitioning → T(n)=O(n). - In the recursion tree method, the sum of leaf costs represents:
A) The base-case work aggregated over all leaves
B) Only the root's cost
C) Nothing; it is irrelevant
D) The cost of internal nodes only
Answer: A
Explanation: Recursion tree sums work at each node; leaves contribute their base-case costs. - Multiplying polynomials by naive D&C (split halves, 4 multiplies) gives recurrence (T(n)=4T(n/2)+O(n)) → order:
A) (O(n^2)) B) (O(n\log n)) C) (O(n^{\log_2 4}=n^2)) D) both A and C
Answer: D
Explanation: 4 subproblems of half-size → n^2. - Cooley–Tukey FFT divide-and-conquer computes DFT in:
A) (O(n\log n)) B) (O(n^2)) C) (O(n)) D) (O(\log n))
Answer: A
Explanation: FFT splits into two halves recursively → T(n)=2T(n/2)+O(n) → n log n. - For recurrence (T(n)=T(n-1)+T(n-2)+O(1)) (like naive Fibonacci), complexity is:
A) (O(2^n)) B) (O(n)) C) (O(n\log n)) D) (O(n^2))
Answer: A
Explanation: Branching factor >1 yields exponential growth ≈ φ^n. - Divide & conquer closest pair requires sorting by x then merging by y. If we re-sort by y at each step it becomes:
A) (O(n\log^2 n)) B) (O(n\log n)) C) (O(n^2)) D) (O(n))
Answer: A
Explanation: Re-sorting by y at each of the Θ(log n) levels adds an extra log factor; efficient implementations carry the y-sorted order up through the recursion (as in merge sort) to keep O(n log n). - Recurrence (T(n)=T(\alpha n)+T((1-\alpha)n)+O(n)) for constant 0<α<1 gives T(n)=:
A) (O(n\log n)) B) (O(n)) C) (O(n^2)) D) (O(\log n))
Answer: A
Explanation: Each level costs Θ(n) in total, and with constant α the depth is Θ(log n) → O(n log n). - Finding kth smallest by randomized pivot has expected time:
A) (O(n)) B) (O(n\log n)) C) (O(n^2)) D) (O(\log n))
Answer: A
Explanation: Quickselect expected linear; worst-case O(n^2) but randomized gives expected O(n). - Recurrence (T(n)=aT(n/b)+n^d) with (a=1,b=2,d=1) → T(n)=:
A) (O(n)) B) (O(n\log n)) C) (O(\log n)) D) (O(n^2))
Answer: A
Explanation: Case 3 of Master theorem → T(n)=Θ(n). - Counting number of inversions by D&C on array of 1024 elements will take ≈:
A) 1024² operations B) 1024·log₂1024 = 1024·10 ≈ 10240 operations (Θ(n log n)) C) 1024 only D) 2¹⁰ squared
Answer: B
Explanation: Merge-based count runs in Θ(n log n). - Recurrence (T(n)=2T(n/2)+n/\log n) yields:
A) Θ(n log log n) B) Θ(n log n) C) Θ(n) D) Θ(n log^2 n)
Answer: A
Explanation: f(n)=n/log n = n^{log_b a}/log n falls in the gap the Master theorem does not cover; a recursion-tree expansion gives level costs n/(log n − i), which sum to Θ(n log log n).
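Borderline recurrences like this one are easy to sanity-check numerically by expanding the recursion tree. A minimal sketch (the base case T(n)=1 for n ≤ 2 is an assumed normalization for illustration):

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def T(n: int) -> float:
    # Recursion-tree expansion of T(n) = 2T(n/2) + n/log2(n);
    # base case T(n) = 1 for n <= 2 is an assumed normalization.
    if n <= 2:
        return 1.0
    return 2 * T(n // 2) + n / math.log2(n)

# If T(n) = Θ(n log log n), this ratio should settle near a constant.
for k in (8, 12, 16, 20):
    n = 2 ** k
    print(k, round(T(n) / (n * math.log2(math.log2(n))), 4))
```

The printed ratios stay roughly constant as k grows, consistent with the Θ(n log log n) claim, whereas dividing by n or by n log n would drift.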
- Recurrence (T(n)=T(n/2)+T(n/3)+O(n)) asymptotically is:
A) (O(n\log n)) B) (O(n)) C) (O(n^{1.5})) D) (O(n\log^2 n))
Answer: B
Explanation: Since 1/2 + 1/3 = 5/6 < 1, per-level work decays geometrically from the root's Θ(n), so the total is Θ(n) (the Akra–Bazzi exponent is below 1). - In a D&C FFT with n=2^k, number of complex multiplications is about:
A) n log₂n B) n² C) n D) log n
Answer: A
Explanation: Cooley–Tukey does Θ(n log n) multiplications. - Karatsuba recurrence (T(n)=3T(n/2)+Θ(n)) → exponent = log₂3≈1.585 so complexity:
A) Θ(n^{1.585}) B) Θ(n^2) C) Θ(n log n) D) Θ(n)
Answer: A
Explanation: Master theorem yields n^{log_2 3} dominance. - A divide & conquer majority-element algorithm (merge the candidate from each half, then verify by counting) runs in:
A) O(n log n) B) O(n) C) O(n^2) D) O(log n)
Answer: A
Explanation: Each combine must verify the two candidates by counting occurrences, costing O(n); T(n)=2T(n/2)+O(n) → O(n log n). Boyer–Moore voting achieves O(n) without D&C. - Merge sort is stable while quicksort (in-place) is:
A) Always stable B) Not stable by default (unstable) C) Stable if implemented with extra memory only D) Both B and C
Answer: D
Explanation: In-place quicksort not stable; stability can be enforced with extra space. - Recurrence (T(n)=T(n/2)+n) solves to:
A) Θ(n) B) Θ(n log n) C) Θ(log n) D) Θ(n^2)
Answer: A
Explanation: The level costs form a geometric series n + n/2 + n/4 + … = Θ(n), dominated by the root. - For closest pair divide & conquer, the strip check examines at most how many points ahead in y-order?
A) 7 (constant) B) O(n) C) O(log n) D) n/2
Answer: A
Explanation: A packing argument in the strip bounds the candidates: each point needs comparison with at most 7 subsequent points in y-order. - Karatsuba's exponent (≈1.585) beats grade-school's 2, so for large enough inputs (e.g., two 1024-bit integers) Karatsuba is faster. True or False?
A) True B) False
Answer: A
Explanation: Karatsuba O(n^{1.585}) vs school O(n^2); for large n Karatsuba better. - For a recurrence (T(n)=aT(n/b)+n^d) if (d<\log_b a) then T(n)=:
A) Θ(n^{\log_b a}) (Case 1) B) Θ(n^d \log n) (Case 2) C) Θ(n^d) (Case 3) D) Θ(\log n)
Answer: A
Explanation: f(n) smaller than n^{log_b a} implies root-dominated tree → Θ(n^{log_b a}). - Recurrence (T(n)=2T(n/2)+n\log n) yields:
A) Θ(n\log^2 n) B) Θ(n\log n) C) Θ(n^2) D) Θ(n)
Answer: A
Explanation: f(n)=n log n = n^{log_b a}·log n is not polynomially larger than n, so plain case 3 does not apply; the extended case 2 (k=1) gives Θ(n log^2 n). - For D&C algorithm that splits into 9 subproblems of size n/3 with linear combine, exponent = log_3 9 = 2 → T(n)=Θ(n^2). True or False?
A) True B) False
Answer: A
Explanation: log_3 9 = 2, so n^2 dominates. - Search for peak (element greater than neighbors) in unimodal array using divide & conquer (binary search style) runs in:
A) O(log n) B) O(n) C) O(n log n) D) O(1)
Answer: A
Explanation: Compare middle with neighbor and recurse on side with greater neighbor → halving each step. - D&C counting number of pairs with distance ≤ K in 1D with sorting and recursion runs in:
A) O(n log n) B) O(n^2) C) O(n) D) O(log n)
Answer: A
Explanation: Similar to inversion count with merge-like counting. - For recurrence (T(n)=T(n-1)+O(\log n)) gives complexity:
A) Θ(n log n) B) Θ(n) C) Θ(log n) D) Θ(n\log\log n)
Answer: A
Explanation: Sum of log k for k up to n is log(n!) ≈ n log n. - Quickselect is expected O(n) but worst-case O(n^2). Median-of-medians gives worst-case O(n) by guaranteeing the pivot falls between roughly the 30th and 70th percentiles. True or False?
A) True B) False
Answer: A
Explanation: Deterministic selection selects pivot guaranteeing constant fraction reduction. - Dividing into three parts of size ~n/3 with recurrence (T(n)=3T(n/3)+n) yields:
A) Θ(n log n) B) Θ(n) C) Θ(n^2) D) Θ(n^{log_3 3})
Answer: A
Explanation: a=3,b=3 → n^{log_b a}=n; f(n)=n → Θ(n log n). - In D&C convex hull (divide & conquer algorithm), merging two hulls can be done in:
A) O(h1 + h2) time (linear in hull sizes) B) O(n^2) C) O(n log n) D) O(1)
Answer: A
Explanation: Upper and lower tangents found by walking around hulls → linear. - Counting inversion recurrence via merge produces T(n)=2T(n/2)+Θ(n). Number of inversions for n distinct sorted descending is:
A) n(n−1)/2 B) n C) 0 D) n log n
Answer: A
Explanation: Every pair inverted in descending array → n choose 2. - D&C algorithm that splits into 8 subproblems of size n/2 with combine O(n) yields exponent log₂ 8 = 3 so complexity:
A) Θ(n^3) B) Θ(n^2) C) Θ(n\log n) D) Θ(n)
Answer: A
Explanation: a=8,b=2 → n^{log_2 8}=n^3. - Solving recurrence (T(n)=T(\sqrt{n})+1) by variable change m=log n gives depth ≈ log log n → T(n)=Θ(\log\log n). True or False?
A) True B) False
Answer: A
Explanation: Substituting m = log n turns T(n)=T(√n)+1 into S(m)=S(m/2)+1, which halves m each level → Θ(log log n). - D&C for maximum subarray using divide and conquer gives recurrence (T(n)=2T(n/2)+Θ(1)) if careful; complexity:
A) Θ(n) B) Θ(n log n) C) Θ(n^2) D) Θ(log n)
Answer: A
Explanation: Merge step constant time if combine max suffix/prefix properly → linear overall. - Master theorem cannot be applied when f(n) is irregular (e.g., f(n)=n^{\log_b a}·log n·log log n). Then use recursion tree or Akra–Bazzi. True or False?
A) True B) False
Answer: A
Explanation: Master theorem covers many but not all f(n); advanced theorems handle irregularities. - For a recurrence (T(n)=T(n-1)+T(n-3)+Θ(1)) (nonuniform splits), growth is exponential because the characteristic equation has a root greater than 1. True or False?
A) True B) False
Answer: A
Explanation: Recurrence with constant branching >1 generally yields exponential solutions. - In D&C, tail recursion (one recursive call last) can be optimized to iteration reducing stack. True or False?
A) True B) False
Answer: A
Explanation: Tail-call elimination converts recursion to loop saving space. - For divide & conquer selection of thresholds where subproblem sizes vary (like T(n)=T(n-1)+T(1)+Θ(n)), worst-case becomes Θ(n^2). True or False?
A) True B) False
Answer: A
Explanation: Linear-depth recursion with linear combine yields quadratic. - In a D&C for multiplying dense polynomials using FFT, complexity is Θ(n log n). Using naive convolution it’s Θ(n^2). True or False?
A) True B) False
Answer: A
Explanation: FFT reduces convolution to pointwise multiplication in O(n log n). - Recurrence (T(n)=T(α n)+T(β n)+Θ(n)) where α+β<1 yields T(n)=Θ(n). True or False?
A) True B) False
Answer: A
Explanation: Subproblems shrink sufficiently so linear combine across levels sums to linear. - For D&C that splits into 2 subproblems of size n−1: (T(n)=2T(n-1)+O(1)) gives exponential Θ(2^n). True or False?
A) True B) False
Answer: A
Explanation: Each level nearly doubles number of subproblems only slightly smaller → exponential. - For recurrence (T(n)=aT(n/b)+f(n)) if f(n)=Θ(n^{log_b a}·log^k n), then T(n)=Θ(n^{log_b a}·log^{k+1} n). True or False?
A) True B) False
Answer: A
Explanation: When f matches n^{log_b a} times polylog, multiply by extra log factor. - In recursive matrix multiplication via D&C using block multiplication into 8 blocks (standard), complexity Θ(n^3). Using Strassen reduces constant exponent. True or False?
A) True B) False
Answer: A
Explanation: Standard block multiply yields 8 mults of size n/2 → n^3; Strassen reduces mults to 7. - D&C convex hull requires sorting points by x first: cost O(n log n) dominated by sort; hull combine linear → total O(n log n). True or False?
A) True B) False
Answer: A
Explanation: Sorting dominates; divide/merge hull linear. - For recurrence (T(n)=T(n/2)+T(n/3)+T(n/6)+Θ(n)), since fractions sum to 1, complexity is Θ(n log n). True or False?
A) True B) False
Answer: A
Explanation: When sum of fractions =1 and combine cost Θ(n), levels number Θ(log n) → n log n. - Dividing into subproblems of sizes n/2 ±1 (slightly imbalanced) usually doesn’t change asymptotic complexity of balanced case. True or False?
A) True B) False
Answer: A
Explanation: Constant-size imbalance doesn’t affect asymptotic exponent. - D&C algorithm to find majority element works by combining candidates from halves and verifying. Complexity O(n). True or False?
A) True B) False
Answer: B
Explanation: Verifying the merged candidates costs O(n) per combine, so T(n)=2T(n/2)+O(n)=O(n log n); linear time requires Boyer–Moore voting instead. - Recurrence (T(n)=2T(n/2)+n\log n) solved earlier yields Θ(n log^2 n). True or False?
A) True B) False
Answer: A
Explanation: f(n)=n log n = n^{log_b a} log n → leads to extra log factor. - For integer multiplication, Toom–Cook (generalization of Karatsuba) improves exponent further than Karatsuba. True or False?
A) True B) False
Answer: A
Explanation: Toom-k reduces exponent closer to 1 as k increases; asymptotically FFT best. - In D&C closest pair, if points have integer coordinates with bounded range, a bucketing approach gives O(n) time, bypassing D&C. True or False?
A) True B) False
Answer: A
Explanation: With bounded coordinate range you can use hashing/bucketing for linear time. - Solving recurrence (T(n)=T(n/2)+T(n/4)+T(n/8)+O(n)): the subproblem sizes sum to 7n/8 < n, so T(n)=Θ(n). True or False?
A) True B) False
Answer: A
Explanation: Per-level work shrinks geometrically (by the factor 7/8), so the root's O(n) term dominates → Θ(n). - D&C selection algorithm using median of medians splits into groups of 7 rather than 5 yields still linear time but with different constants. True or False?
A) True B) False
Answer: A
Explanation: Any constant group size ≥5 works with different constants ensuring linear worst-case. - In D&C, if combine step costs Θ(n^2) while recursing into two halves, recurrence (T(n)=2T(n/2)+n^2) yields Θ(n^2). True or False?
A) True B) False
Answer: A
Explanation: f dominates so T(n)=Θ(n^2). - For D&C to compute convex hull in higher dimensions (d >2), algorithms are more complex and asymptotically more expensive (e.g., O(n^{⌈d/2⌉})). True or False?
A) True B) False
Answer: A
Explanation: Convex hull complexity grows with dimension; algorithms become higher polynomial. - Recurrence (T(n)=T(n/2)+T(n/3)+T(n/8)+Θ(n)) has fractions summing to 23/24 < 1 ⇒ T(n)=Θ(n). True or False?
A) True B) False
Answer: A
Explanation: When the size coefficients sum to less than 1, the linear combine term dominates and the total is Θ(n). - Merge sort's auxiliary space is Θ(n) for the standard top-down version, while in-place merge variants need only O(1) extra space at the cost of a slower combine. True or False?
A) True B) False
Answer: A
Explanation: In-place merging possible but more complicated and slower. - The Master theorem fails when recursion has non-constant number of subproblems depending on n (e.g., a(n) non-constant). True or False?
A) True B) False
Answer: A
Explanation: Master theorem assumes constant a and b; variable branching needs other methods. - For (T(n)=T(n-1)+Θ(1)) we get Θ(n). This is a degenerate divide & conquer with one subproblem. True or False?
A) True B) False
Answer: A
Explanation: Linear recursion yields linear time. - Multiplying polynomials via FFT-based convolution is preferable once the degree n is large enough (threshold around the hundreds). True or False?
A) True B) False
Answer: A
Explanation: FFT overhead pays off for sufficiently large n. - In D&C, the number of leaves in recursion tree times base cost equals total leaf cost. For balanced binary recurrences with leaf cost Θ(1), number of leaves ≈ n^{log_b a}. True or False?
A) True B) False
Answer: A
Explanation: Leaf count approximates a^{height} where height log_b n. - The recurrence (T(n)=3T(n/4)+n) yields complexity (Θ(n^{\log_4 3})≈Θ(n^{0.793})). True or False?
A) True B) False
Answer: B
Explanation: n^{log_4 3} ≈ n^{0.79}, but f(n)=n is polynomially larger → case 3 → T(n)=Θ(n), so the claimed bound is wrong. - For D&C that uses random pivot (QuickSort) average-case recurrence T(n)=2T(n/2)+Θ(n) gives Θ(n log n). True or False?
A) True B) False
Answer: A
Explanation: Expected balanced splits lead to n log n. - D&C algorithm to compute k-th order statistic by splitting into √n chunks (median of medians variant) gives linear time too. True or False?
A) True B) False
Answer: A
Explanation: With a suitably adapted analysis, other grouping schemes can also guarantee discarding a constant fraction, though the constants change. - Recurrence (T(n)=T(\alpha n)+Θ(n)) with α in (0,1) gives Θ(n). True or False?
A) True B) False
Answer: A
Explanation: Shrinking subproblem plus linear combine yields linear time. - For D&C on graphs, divide by removing separator of size O(√n) (planar graphs) gives algorithms like separator-based shortest paths; complexity often improved. True or False?
A) True B) False
Answer: A
Explanation: Planar separators lead to divide-and-conquer with good bounds. - In recurrence (T(n)=2T(n/2)+n/\log n) we earlier concluded Θ(n log log n), i.e. T(2^k) grows like 2^k·log k. True or False?
A) True B) False
Answer: A
Explanation: Subtle polylog yields extra log log factor. - D&C convex hull naive merging yields O(n log n); incremental insertion can be worse in the worst case O(n^2). True or False?
A) True B) False
Answer: A
Explanation: Incremental algorithms can degrade on adversarial input. - For D&C median-of-medians selection using groups of 3 fails to guarantee linear time (group size too small). True or False?
A) True B) False
Answer: A
Explanation: Groups of 3 don’t guarantee constant fraction discarded; groups of 5 suffice. - Divide & conquer algorithms naturally parallelize because subproblems independent; parallel time reduces to depth of recursion. True or False?
A) True B) False
Answer: A
Explanation: Parallelism available across subproblems; critical path equals recursion depth. - D&C solution to integer multiplication (Schonhage–Strassen) achieves near-linear time O(n log n log log n) asymptotically (using FFT over rings). True or False?
A) True B) False
Answer: A
Explanation: FFT-based multiplication over rings brings the cost of integer multiplication down to near-linear O(n log n log log n).
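Several questions above lean on Karatsuba's T(n)=3T(n/2)+Θ(n) recurrence; the three-multiplication split can be sketched directly on Python integers. A minimal sketch for nonnegative inputs (the base-case threshold 16 is an arbitrary assumption; any small cutoff works):

```python
def karatsuba(x: int, y: int) -> int:
    """Karatsuba multiply for nonnegative ints: T(n) = 3T(n/2) + Theta(n)."""
    if x < 16 or y < 16:  # assumed base-case threshold: multiply directly
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)   # split x = hi_x*2^m + lo_x
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)   # split y = hi_y*2^m + lo_y
    a = karatsuba(hi_x, hi_y)                 # product of high halves
    b = karatsuba(lo_x, lo_y)                 # product of low halves
    c = karatsuba(hi_x + lo_x, hi_y + lo_y)   # cross term = c - a - b
    return (a << (2 * m)) + ((c - a - b) << m) + b

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```

The third call replaces the two cross products of the naive four-multiplication split, which is exactly what drops the exponent from 2 to log₂3.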
- Using D&C to compute matrix chain optimal parenthesization with naive recursion is exponential, but DP reduces it to O(n^3). True or False?
A) True B) False
Answer: A
Explanation: Brute-force recursion enumerates exponentially many parenthesizations; DP memoizes polynomially. - In D&C, when combining solutions requires global communication (e.g., all-to-all), parallel efficiency drops. True or False?
A) True B) False
Answer: A
Explanation: Heavy combine step becomes bottleneck limiting parallel speedup. - For recurrence (T(n)=aT(n/b)+n^d), if a < b^d then the combine term dominates and T(n)=Θ(n^d). True or False?
A) True B) False
Answer: A
Explanation: If f(n) polynomially larger than n^{log_b a}, f dominates. - D&C for matrix inversion often uses block inversion with recursion and runs asymptotically like matrix multiplication. True or False?
A) True B) False
Answer: A
Explanation: Block inversion reduces to matrix multiplications and solves with same exponent asymptotically. - Divide & conquer algorithm to compute skyline of buildings sorts by x then merges skylines in linear combine → overall O(n log n). True or False?
A) True B) False
Answer: A
Explanation: Merge skylines linear; sort dominates. - For T(n)=T(n/2)+T(n/3)+T(n/8)+n^{0.5}: the size fractions sum to 23/24 < 1 and f is sublinear, so T(n)=O(n). True or False?
A) True B) False
Answer: A
Explanation: Shrinking splits make linear term sum to linear overall. - Divide & conquer algorithms sometimes use fallback to naive methods for small n (base case threshold) for constant factor improvement. True or False?
A) True B) False
Answer: A
Explanation: Practical implementations switch to insertion sort/naive multiply for small sizes. - For recurrence (T(n)=T(n-1)+T(n-1)+O(1)=2T(n-1)+O(1)) we get Θ(2^n). True or False?
A) True B) False
Answer: A
Explanation: Each level doubles subproblems causing exponential growth. - D&C for calculating convex polygon diameter (rotating calipers) after hull computed is linear; overall after sorting hull cost still O(n log n). True or False?
A) True B) False
Answer: A
Explanation: Hull O(n log n), diameter linear on hull size. - Recurrence with variable split sizes like T(n)=T(n- n^{1/2})+Θ(1) has depth ~ n^{1/2} → T(n)=Θ(n^{1/2}). True or False?
A) True B) False
Answer: A
Explanation: Each step reduces n by sqrt(n) roughly, giving sqrt levels. - In D&C, tail recursion optimization reduces space but not the number of operations. True or False?
A) True B) False
Answer: A
Explanation: Eliminates stack overhead but same total work. - For D&C on strings, suffix array construction can be done in Θ(n) with induced sorting or Θ(n log n) with D&C suffix sorting (DC3 algorithm achieves linear). True or False?
A) True B) False
Answer: A
Explanation: DC3 (Skew) algorithm is linear-time; simpler D&C variants are n log n. - For D&C exponentiation (binary exponentiation), recurrence T(n)=T(n/2)+O(1) gives Θ(log n) time. True or False?
A) True B) False
Answer: A
Explanation: Repeated squaring halves exponent each step. - D&C algorithm that does constant work per level and has Θ(log n) levels yields total Θ(log n). True or False?
A) True B) False
Answer: A
Explanation: Sum over levels constant times number of levels. - In D&C, if subproblems overlap substantially, memoization (DP) may be preferable to pure recursion to avoid repeated work. True or False?
A) True B) False
Answer: A
Explanation: Overlap suggests dynamic programming. - For recurrence (T(n)=2T(n/2)+\frac{n}{\log n}), earlier we got Θ(n log log n). So it grows faster than linear but slower than n log n. True or False?
A) True B) False
Answer: A
Explanation: Subtle polylog yields intermediate growth. - D&C on matrices for determinant via LU decomposition is typically O(n^3) dominated by matrix multiply; faster algorithms exist asymptotically. True or False?
A) True B) False
Answer: A
Explanation: Standard Gaussian elimination cubic; fast matrix methods reduce exponent. - For divide & conquer that always halves problem but has combine cost Θ(n^2), recurrence (T(n)=2T(n/2)+n^2) gives Θ(n^2). True or False?
A) True B) False
Answer: A
Explanation: f(n)=n^2 dominates n^{log_b a}=n, so T=Θ(n^2). - Final conceptual: The standard template for proving correctness of a divide & conquer algorithm is: show correctness on base case, assume correctness for subproblems (induction), and prove combine yields correct global solution. True or False?
A) True B) False
Answer: A
Explanation: Structural induction with combine step yields correctness proof.
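To tie the final point back to code: a merge-based inversion counter (used in several questions above) makes the induction template concrete. The base case is correct by inspection, the halves are correct by the inductive hypothesis, and the merge both sorts and counts the cross inversions. A minimal sketch:

```python
def count_inversions(a):
    """Return (sorted list, inversion count) via T(n)=2T(n/2)+Theta(n)."""
    if len(a) <= 1:                      # base case: sorted, 0 inversions
        return list(a), 0
    mid = len(a) // 2
    left, inv_left = count_inversions(a[:mid])    # inductive hypothesis:
    right, inv_right = count_inversions(a[mid:])  # halves sorted & counted
    merged, cross, i, j = [], 0, 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            # left[i] > right[j]: every remaining left element pairs
            # with right[j] as a cross inversion
            cross += len(left) - i
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inv_left + inv_right + cross

assert count_inversions([3, 2, 1])[1] == 3   # descending: n(n-1)/2 = 3
```

The combine step is where the correctness argument lives: every inversion is either within a half (counted recursively) or crosses the split (counted during the merge), and the two cases are disjoint and exhaustive.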