Divide & Conquer (Algorithm Design Techniques)
- Recurrence: (T(n)=2T(n/2)+n). By Master theorem, (T(n)=)
A) (O(n))  B) (O(n\log n))  C) (O(n^2))  D) (O(\log n))
Answer: B
Explanation: (a=2,b=2,f(n)=n). (n^{\log_b a}=n). Case 2 ⇒ (T(n)=\Theta(n\log n)). - Merge sort on array of size 4096 does worst-case comparisons ≈:
A) 4096  B) 4096·log₂(4096) = 4096·12 = 49152  C) 2^12  D) 12
Answer: B
Explanation: Merge sort does Θ(n log n) comparisons; log₂4096 = 12. - Recurrence (T(n)=3T(n/3)+n) solves to:
A) (O(n\log n))  B) (O(n))  C) (O(n^{\log_3 3}))  D) (O(n^2))
Answer: A
Explanation: (a=3,b=3\Rightarrow n^{\log_b a}=n). f(n)=n ⇒ case 2 ⇒ Θ(n log n). - QuickSort worst case on sorted input (n elements) with naive pivot choice is:
A) (O(n\log n))  B) (O(n))  C) (O(n^2))  D) (O(\log n))
Answer: C
Explanation: Picking the first or last element as pivot yields degenerate partitions ⇒ quadratic. - Closest pair of points via divide & conquer runs in:
A) (O(n^2))  B) (O(n\log n))  C) (O(n\log^2 n))  D) (O(n))
Answer: B
Explanation: Sort by x once; the merge-strip step is linear per level ⇒ O(n log n). - Recurrence (T(n)=4T(n/2)+n) gives order:
A) (O(n^2))  B) (O(n^{\log_2 4}=n^2))  C) both A and B are the same  D) (O(n\log n))
Answer: C
Explanation: a=4,b=2 ⇒ (n^{\log_2 4}=n^2); f(n)=n is smaller ⇒ dominated by n^2. - Karatsuba multiplication for two n-digit numbers runs in:
A) (O(n^2))  B) (O(n^{\log_2 3})\approx O(n^{1.585}))  C) (O(n\log n))  D) (O(\log n))
Answer: B
Explanation: Karatsuba reduces to 3 multiplications on half-size inputs ⇒ exponent log₂3. - Strassen matrix multiply for n×n (n a power of 2) runs in:
A) (O(n^{2.807}))  B) (O(n^3))  C) (O(n^2))  D) (O(n\log n))
Answer: A
Explanation: Strassen does 7 submultiplications on n/2 ⇒ exponent log₂7 ≈ 2.807. - Counting inversions via divide & conquer (merge-based) runs in:
A) (O(n^2))  B) (O(n\log n))  C) (O(n))  D) (O(\log n))
Answer: B
Explanation: The merge step counts cross inversions in O(n); the recurrence gives n log n. - Recurrence (T(n)=T(0.9n)+O(1)) solves to:
A) (O(\log n))  B) (O(n))  C) (O(1))  D) (O(n\log n))
Answer: A
Explanation: Geometric shrink ⇒ number of levels Θ(log n). - Binary search in a sorted array of size 1,000,000 takes worst-case comparisons ≈:
A) 20  B) 10  C) 2  D) 100
Answer: A
Explanation: log₂(1e6) ≈ 20 ⇒ O(log n). - Recurrence (T(n)=2T(n/4)+\sqrt{n}): compare n^{\log_b a}=n^{\log_4 2}=n^{1/2}. So complexity =
A) (O(\sqrt{n}\log n))  B) (O(\sqrt{n}))  C) (O(n))  D) (O(n^{3/2}))
Answer: A
Explanation: f(n)=Θ(n^{1/2}) equals n^{\log_b a} ⇒ case 2 ⇒ Θ(n^{1/2} log n). - Divide & conquer algorithm that splits into 5 subproblems of size n/3 with no extra cost: (T(n)=5T(n/3)) order =
A) (O(n^{\log_3 5}\approx n^{1.464}))  B) (O(n\log n))  C) (O(n^2))  D) (O(n))
Answer: A
Explanation: Pure power law: exponent log₃5 ≈ 1.464. - Median-of-medians selection algorithm worst-case time:
A) (O(n\log n))  B) (O(n))  C) (O(n^2))  D) (O(\log n))
Answer: B
Explanation: Deterministic linear-time selection uses 5-group partitioning ⇒ T(n)=O(n). - In divide & conquer, the recursion tree method: the leaf-cost sum equals the total work done by:
A) Base-case costs aggregated over leaves โ sum of leaf costs
B) Only root cost matters
C) It’s irrelevant
D) Counts internal nodes only
Answer: A
Explanation: The recursion tree sums work at each node; leaves contribute their base-case costs. - Multiplying polynomials by naive D&C (split halves, 4 multiplies) gives recurrence (T(n)=4T(n/2)+O(n)) ⇒ order:
A) (O(n^2))  B) (O(n\log n))  C) (O(n^{\log_2 4}=n^2))  D) both A and C
Answer: D
Explanation: 4 subproblems of half size ⇒ n^2. - Cooley–Tukey FFT divide-and-conquer computes the DFT in:
A) (O(n\log n))  B) (O(n^2))  C) (O(n))  D) (O(\log n))
Answer: A
Explanation: FFT splits into two halves recursively ⇒ T(n)=2T(n/2)+O(n) ⇒ n log n. - For recurrence (T(n)=T(n-1)+T(n-2)+O(1)) (like naive Fibonacci), complexity is:
A) (O(2^n))  B) (O(n))  C) (O(n\log n))  D) (O(n^2))
Answer: A
Explanation: Branching factor >1 yields exponential growth ⇒ Θ(φ^n). - Divide & conquer closest pair requires sorting by x then merging by y. If we re-sort by y at each step it becomes:
A) (O(n\log^2 n))  B) (O(n\log n)) if we carry the sorted order through levels  C) (O(n^2))  D) (O(n))
Answer: A for the naive version that re-sorts at each level; an efficient implementation that carries the y-order through the recursion achieves B.
Explanation: Re-sorting at each recursion level costs an extra log factor; efficient implementations avoid re-sorting to keep O(n log n). - Recurrence (T(n)=T(\alpha n)+T((1-\alpha)n)+O(n)) for constant 0<α<1 gives T(n)=:
A) (O(n\log n))  B) (O(n))  C) (O(n^2))  D) (O(\log n))
Answer: A
Explanation: Each level does linear work and there are Θ(log n) levels, since both parts shrink by a constant factor. - Finding the kth smallest by randomized pivot has expected time:
A) (O(n))  B) (O(n\log n))  C) (O(n^2))  D) (O(\log n))
Answer: A
Explanation: Quickselect is expected linear; worst case O(n^2), but randomization gives expected O(n). - Recurrence (T(n)=aT(n/b)+n^d) with (a=1,b=2,d=1) ⇒ T(n)=:
A) (O(n))  B) (O(n\log n))  C) (O(\log n))  D) (O(n^2))
Answer: A
Explanation: (n^{\log_2 1}=n^0=1); f(n)=n is polynomially larger than 1, so Master theorem case 3 gives T(n)=Θ(n). - Counting number of inversions by D&C on array of 1024 elements will take ≈:
A) 1024² operations  B) 1024·log₂1024 = 10240 operations (Θ(n log n))  C) 1024 only  D) 2¹⁰ squared
Answer: B
Explanation: Merge-based counting runs in Θ(n log n). - Recurrence (T(n)=2T(n/2)+n/\log n) yields:
A) Θ(n log log n)  B) Θ(n log n)  C) Θ(n)  D) Θ(n log^2 n)
Answer: A
Explanation: f(n)=n/log n = n^{\log_b a}/log n falls in the gap between Master theorem cases; unrolling the tree gives level costs n/(log n − i), a harmonic sum ⇒ Θ(n log log n).
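This borderline recurrence can be sanity-checked numerically; a minimal sketch, where the base case T(2)=1 is an arbitrary assumption of the sketch:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 2*T(n/2) + n/log2(n), with assumed base case T(2) = 1
    if n <= 2:
        return 1.0
    return 2 * T(n // 2) + n / math.log2(n)

# Dividing by n: t(k) = T(2^k)/2^k satisfies t(k) = t(k-1) + 1/k,
# a harmonic sum of Theta(log k) terms, so T(n) = Theta(n log log n).
for k in range(3, 21):
    inc = T(2**k) / 2**k - T(2**(k - 1)) / 2**(k - 1)
    assert abs(inc - 1 / k) < 1e-9  # each level adds exactly 1/k
```

The per-level increment 1/k is what accumulates to the harmonic sum; this is why the answer gains a log log factor rather than a full log factor.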
- Recurrence (T(n)=T(n/2)+T(n/3)+O(n)) asymptotically is:
A) (O(n\log n))  B) (O(n))  C) (O(n^{1.5}))  D) (O(n\log^2 n))
Answer: B
Explanation: The fractions sum to 1/2+1/3 = 5/6 < 1, so per-level work decreases geometrically and the O(n) root term dominates ⇒ total Θ(n). - In a D&C FFT with n=2^k, the number of complex multiplications is about:
A) n log₂n  B) n²  C) n  D) log n
Answer: A
Explanation: Cooley–Tukey does Θ(n log n) multiplications. - Karatsuba recurrence (T(n)=3T(n/2)+Θ(n)) ⇒ exponent = log₂3 ≈ 1.585, so complexity:
A) Θ(n^{1.585})  B) Θ(n^2)  C) Θ(n log n)  D) Θ(n)
Answer: A
Explanation: The Master theorem yields n^{\log_2 3} dominance. - Divide & conquer to find the majority element (Boyer-Moore is linear) by merging candidates from halves runs in:
A) O(n log n)  B) O(n)  C) O(n^2)  D) O(log n)
Answer: A
Explanation: Verifying the two candidates in the combine step costs O(n), so T(n)=2T(n/2)+O(n) ⇒ O(n log n); Boyer-Moore achieves O(n) without D&C. - Merge sort is stable while quicksort (in-place) is:
A) Always stable  B) Not stable by default (unstable)  C) Stable only if implemented with extra memory  D) Both B and C
Answer: D
Explanation: In-place quicksort is not stable; stability can be enforced with extra space. - Recurrence (T(n)=T(n/2)+n) solves to:
A) Θ(n)  B) Θ(n log n)  C) Θ(log n)  D) Θ(n^2)
Answer: A
Explanation: The per-level costs form the geometric series n + n/2 + n/4 + …, which sums to Θ(n). - For closest pair divide & conquer, the strip check examines at most how many points ahead in y-order?
A) 7 (constant)  B) O(n)  C) O(log n)  D) n/2
Answer: A
Explanation: In planar geometry, only up to 7 neighbors need checking. - Multiplying two 1024-bit integers with Karatsuba vs grade-school: the smaller exponent means Karatsuba is faster as n grows. True or False?
A) True  B) False
Answer: A
Explanation: Karatsuba is O(n^{1.585}) vs schoolbook O(n^2); for large n Karatsuba is better. - For a recurrence (T(n)=aT(n/b)+n^d), if (d<\log_b a) then T(n)=:
A) Θ(n^{\log_b a}) (Case 1)  B) Θ(n^d \log n) (Case 2)  C) Θ(n^d) (Case 3)  D) Θ(\log n)
Answer: A
Explanation: f(n) smaller than n^{\log_b a} implies a leaf-dominated tree ⇒ Θ(n^{\log_b a}). - Recurrence (T(n)=2T(n/2)+n\log n) yields:
A) Θ(n\log^2 n)  B) Θ(n\log n)  C) Θ(n^2)  D) Θ(n)
Answer: A
Explanation: a=2,b=2 ⇒ n^{\log_b a}=n, and f(n)=n log n = n^{\log_b a}·log n, so the extended case 2 adds one log factor ⇒ Θ(n log^2 n). - For D&C algorithm that splits into 9 subproblems of size n/3 with linear combine, exponent = log_3 9 = 2 ⇒ T(n)=Θ(n^2). True or False?
A) True  B) False
Answer: A
Explanation: log_3 9 = 2, so n^2 dominates. - Search for a peak (element greater than its neighbors) in a unimodal array using divide & conquer (binary search style) runs in:
A) O(log n)  B) O(n)  C) O(n log n)  D) O(1)
Answer: A
Explanation: Compare the middle with a neighbor and recurse on the side with the greater neighbor ⇒ halving each step. - D&C counting the number of pairs with distance ≤ K in 1D with sorting and recursion runs in:
A) O(n log n)  B) O(n^2)  C) O(n)  D) O(log n)
Answer: A
Explanation: Similar to inversion counting with a merge-like counting step. - Recurrence (T(n)=T(n-1)+O(\log n)) gives complexity:
A) Θ(n log n)  B) Θ(n)  C) Θ(log n)  D) Θ(n\log\log n)
Answer: A
Explanation: The sum of log k for k up to n is log(n!) = Θ(n log n). - Quickselect is expected O(n) but worst-case O(n^2). Median-of-medians gives worst-case O(n) by guaranteeing the pivot is at least the 30th percentile. True or False?
A) True  B) False
Answer: A
Explanation: Deterministic selection chooses a pivot guaranteeing a constant-fraction reduction. - Dividing into three parts of size ~n/3 with recurrence (T(n)=3T(n/3)+n) yields:
A) Θ(n log n), since n^{\log_3 3}=n and f(n)=n ⇒ case 2 ⇒ n log n  B) Θ(n)  C) Θ(n^2)  D) Θ(n^{\log_3 3})
Answer: A
Explanation: a=3,b=3 ⇒ n^{\log_b a}=n; f(n)=n ⇒ Θ(n log n). - In D&C convex hull (divide & conquer algorithm), merging two hulls can be done in:
A) O(h1 + h2) time (linear in hull sizes)  B) O(n^2)  C) O(n log n)  D) O(1)
Answer: A
Explanation: Upper and lower tangents are found by walking around the hulls ⇒ linear. - Counting inversion recurrence via merge produces T(n)=2T(n/2)+Θ(n). Number of inversions for n distinct elements sorted descending is:
A) n(n-1)/2  B) n  C) 0  D) n log n
Answer: A
Explanation: Every pair is inverted in a descending array ⇒ n choose 2. - D&C algorithm that splits into 8 subproblems of size n/2 with combine O(n) yields exponent log₂8 = 3, so complexity:
A) Θ(n^3)  B) Θ(n^2)  C) Θ(n log n)  D) Θ(n)
Answer: A
Explanation: a=8,b=2 ⇒ n^{\log_2 8}=n^3. - Solving recurrence (T(n)=T(\sqrt{n})+1) by the variable change m=log n gives depth ≈ log log n ⇒ T(n)=Θ(\log\log n). True or False?
A) True  B) False
Answer: A
Explanation: Each square root halves log n, so there are about log log n levels. - D&C for maximum subarray using divide and conquer gives recurrence (T(n)=2T(n/2)+Θ(1)) if the combine is careful; complexity:
A) Θ(n)  B) Θ(n log n)  C) Θ(n^2)  D) Θ(log n)
Answer: A
Explanation: The merge step is constant time if each call also returns the best prefix/suffix sums ⇒ linear overall. - Master theorem cannot be applied when f(n) is irregular (e.g., f(n)=n^{\log_b a}·log n·log log n). Then use the recursion tree or Akra–Bazzi. True or False?
A) True  B) False
Answer: A
Explanation: The Master theorem covers many but not all f(n); more general theorems handle the irregular cases. - For a recurrence (T(n)=T(n-1)+T(n-3)+Θ(1)) (nonuniform splits), asymptotic growth is exponential since the characteristic equation has a root >1. True or False?
A) True  B) False
Answer: A
Explanation: Recurrences with constant branching >1 generally have exponential solutions. - In D&C, tail recursion (one recursive call last) can be optimized to iteration, reducing stack usage. True or False?
A) True  B) False
Answer: A
Explanation: Tail-call elimination converts the recursion to a loop, saving space. - For divide & conquer selection where subproblem sizes vary (like T(n)=T(n-1)+T(1)+Θ(n)), the worst case becomes Θ(n^2). True or False?
A) True  B) False
Answer: A
Explanation: Linear-depth recursion with linear combine yields quadratic. - In D&C for multiplying dense polynomials using FFT, complexity is Θ(n log n). Using naive convolution it's Θ(n^2). True or False?
A) True  B) False
Answer: A
Explanation: FFT reduces convolution to pointwise multiplication in O(n log n). - Recurrence (T(n)=T(αn)+T(βn)+Θ(n)) where α+β<1 yields T(n)=Θ(n). True or False?
A) True  B) False
Answer: A
Explanation: Subproblems shrink sufficiently that the linear combine summed across levels stays linear. - For D&C that splits into 2 subproblems of size n-1: (T(n)=2T(n-1)+O(1)) gives exponential Θ(2^n). True or False?
A) True  B) False
Answer: A
Explanation: Each level doubles the number of subproblems while shrinking them only slightly ⇒ exponential. - For recurrence (T(n)=aT(n/b)+f(n)), if f(n)=Θ(n^{\log_b a}·log^k n), then T(n)=Θ(n^{\log_b a}·log^{k+1} n). True or False?
A) True (standard Master theorem case 2 variant)  B) False
Answer: A
Explanation: When f matches n^{\log_b a} times a polylog, the solution gains one extra log factor. - In recursive matrix multiplication via D&C using block multiplication into 8 blocks (standard), complexity is Θ(n^3). Strassen reduces the exponent. True or False?
A) True  B) False
Answer: A
Explanation: Standard block multiply does 8 multiplications of size n/2 ⇒ n^3; Strassen reduces the count to 7. - D&C convex hull requires sorting points by x first: cost O(n log n), dominated by the sort; the hull combine is linear ⇒ total O(n log n). True or False?
A) True  B) False
Answer: A
Explanation: Sorting dominates; divide/merge of hulls is linear. - For recurrence (T(n)=T(n/2)+T(n/3)+T(n/6)+Θ(n)), since the fractions sum to 1, complexity is Θ(n log n). True or False?
A) True  B) False
Answer: A
Explanation: When the size fractions sum to 1 and the combine cost is Θ(n), every level does Θ(n) work over Θ(log n) levels ⇒ n log n. - Dividing into subproblems of sizes n/2 ± 1 (slightly imbalanced) usually doesn't change the asymptotic complexity of the balanced case. True or False?
A) True  B) False
Answer: A
Explanation: A constant-size imbalance doesn't affect the asymptotic exponent. - D&C algorithm to find the majority element works by combining candidates from halves and verifying them. Complexity O(n log n). True or False?
A) True  B) False
Answer: A
Explanation: Verifying the candidates costs O(n) per combine, so T(n)=2T(n/2)+O(n) ⇒ O(n log n). - Recurrence (T(n)=2T(n/2)+n\log n) solved earlier yields Θ(n log^2 n). True or False?
A) True  B) False
Answer: A
Explanation: f(n)=n log n = n^{\log_b a}·log n ⇒ an extra log factor. - For integer multiplication, Toom–Cook (a generalization of Karatsuba) improves the exponent further than Karatsuba. True or False?
A) True  B) False
Answer: A
Explanation: Toom-k reduces the exponent toward 1 as k increases; asymptotically FFT-based methods are best. - In D&C closest pair, if points have integer coordinates with bounded range, a bucketing approach gives O(n) time, bypassing D&C. True or False?
A) True  B) False
Answer: A
Explanation: With a bounded coordinate range you can use hashing/bucketing for linear time. - Solving recurrence (T(n)=T(n/2)+T(n/4)+T(n/8)+O(n)), where the subproblem sizes form a geometric series summing to less than n, gives T(n)=Θ(n). True or False?
A) True  B) False
Answer: A
Explanation: The sizes sum to 7n/8 per level, so per-level work shrinks geometrically and the Θ(n) root term dominates ⇒ Θ(n). - D&C selection algorithm using median of medians with groups of 7 rather than 5 still yields linear time, with different constants. True or False?
A) True  B) False
Answer: A
Explanation: Any constant group size ≥5 works, with different constants, ensuring a linear worst case. - In D&C, if the combine step costs Θ(n^2) while recursing into two halves, recurrence (T(n)=2T(n/2)+n^2) yields Θ(n^2). True or False?
A) True  B) False
Answer: A
Explanation: f(n)=n^2 is polynomially larger than n^{\log_2 2}=n (case 3), so T(n)=Θ(n^2). - For D&C to compute the convex hull in higher dimensions (d > 2), algorithms are more complex and asymptotically more expensive (e.g., O(n^{⌊d/2⌋})). True or False?
A) True  B) False
Answer: A
Explanation: Convex hull complexity grows with dimension; the algorithms become higher-degree polynomial. - Recurrence (T(n)=T(n/3)+T(n/4)+T(n/5)+Θ(n)) has fractions summing to less than 1 ⇒ T(n)=Θ(n). True or False?
A) True  B) False
Answer: A
Explanation: When the size coefficients sum to less than 1 (here 1/3+1/4+1/5 = 47/60), the linear term dominates, giving linear time. - Merge sort's space usage (auxiliary) is Θ(n) for top-down, but bottom-up in-place merge variants exist requiring O(1) extra space but with a slower combine. True or False?
A) True  B) False
Answer: A
Explanation: In-place merging is possible but more complicated and slower. - The Master theorem fails when the recursion has a non-constant number of subproblems depending on n (e.g., a(n) non-constant). True or False?
A) True  B) False
Answer: A
Explanation: The Master theorem assumes constant a and b; variable branching needs other methods. - For (T(n)=T(n-1)+Θ(1)) we get Θ(n). This is a degenerate divide & conquer with one subproblem. True or False?
A) True  B) False
Answer: A
Explanation: Linear recursion yields linear time. - Multiplying polynomials via D&C using convolution via FFT is preferable when the degree n is large enough (threshold about hundreds). True or False?
A) True  B) False
Answer: A
Explanation: FFT overhead pays off for sufficiently large n. - In D&C, the number of leaves in the recursion tree times the base cost equals the total leaf cost. For balanced recurrences with leaf cost Θ(1), the number of leaves ≈ n^{\log_b a}. True or False?
A) True  B) False
Answer: A
Explanation: The leaf count is a^{height} with height log_b n, i.e. a^{\log_b n} = n^{\log_b a}. - The recurrence (T(n)=3T(n/4)+n) yields complexity (\Theta(n^{\log_4 3})\approx\Theta(n^{0.793})). True or False?
A) True  B) False
Answer: B (False)
Explanation: n^{\log_4 3}≈n^{0.79}, but f(n)=n is a polynomially larger power, so case 3 applies and T(n)=Θ(n). - For D&C that uses a random pivot (QuickSort), the average-case recurrence T(n)=2T(n/2)+Θ(n) gives Θ(n log n). True or False?
A) True  B) False
Answer: A
Explanation: Expected balanced splits lead to n log n. - D&C algorithm to compute the k-th order statistic by splitting into √n chunks (a median-of-medians variant) gives linear time too. True or False?
A) True  B) False
Answer: A
Explanation: Various grouping sizes yield linear time with appropriate analysis. - Recurrence (T(n)=T(\alpha n)+Θ(n)) with α in (0,1) gives Θ(n). True or False?
A) True  B) False
Answer: A
Explanation: A geometrically shrinking subproblem plus linear combine yields linear time. - For D&C on graphs, dividing by removing a separator of size O(√n) (planar graphs) gives algorithms like separator-based shortest paths; complexity is often improved. True or False?
A) True  B) False
Answer: A
Explanation: Planar separators lead to divide-and-conquer with good bounds. - In recurrence (T(n)=2T(n/2)+n/\log n) we earlier concluded Θ(n log log n), so T(2^k) scales like 2^k·log k. Is that asymptotic form consistent? True or False?
A) True  B) False
Answer: A
Explanation: At n=2^k, n log log n = 2^k·log k; the borderline toll yields the extra log log factor. - D&C convex hull with naive merging yields O(n log n); incremental insertion can be worse in the worst case, O(n^2). True or False?
A) True  B) False
Answer: A
Explanation: Incremental algorithms can degrade on adversarial input. - D&C median-of-medians selection using groups of 3 fails to guarantee linear time (the group size is too small). True or False?
A) True  B) False
Answer: A
Explanation: Groups of 3 don't guarantee a constant fraction discarded; groups of 5 suffice. - Divide & conquer algorithms naturally parallelize because the subproblems are independent; parallel time reduces to the depth of the recursion. True or False?
A) True  B) False
Answer: A
Explanation: Parallelism is available across subproblems; the critical path equals the recursion depth. - D&C solution to integer multiplication (Schönhage–Strassen) achieves near-linear time O(n log n log log n) asymptotically (using FFT over rings). True or False?
A) True  B) False
Answer: A
Explanation: Advanced algorithms push integer multiplication toward near-linear time. - For recurrence (T(n)=T(n/2)+T(n/3)+T(n/6)+1), where the fractions sum to 1 and the combine cost is constant, T(n)=:
A) Θ(n)  B) Θ(log n)  C) Θ(n log n)  D) Θ(1)
Answer: A
Explanation: By Akra–Bazzi, p=1 solves (1/2)^p+(1/3)^p+(1/6)^p=1; with a constant toll the recursion tree has Θ(n) nodes, so the leaves dominate ⇒ Θ(n).
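The linear bound for T(n)=T(n/2)+T(n/3)+T(n/6)+1 can be checked empirically with a memoized unrolling; the floor divisions and the base case T(n<6)=1 are assumptions of this sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # constant combine cost (the +1); floors make subproblems strictly smaller
    if n < 6:
        return 1
    return T(n // 2) + T(n // 3) + T(n // 6) + 1

# If T(n) = Theta(n), then T(n)/n approaches a constant,
# so doubling n should roughly double T(n).
ratio = T(2_000_000) / T(1_000_000)
assert 1.5 < ratio < 2.5
```

Memoization keeps the check fast even though the value T(n) itself counts the Θ(n) nodes of the unmemoized recursion tree.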
- Using D&C to compute matrix chain optimal parenthesization with naive recursion is exponential, but DP reduces it to O(n^3). True or False?
A) True  B) False
Answer: A
Explanation: Brute-force recursion enumerates exponentially many parenthesizations; DP memoizes them polynomially. - In D&C, when combining solutions requires global communication (e.g., all-to-all), parallel efficiency drops. True or False?
A) True  B) False
Answer: A
Explanation: A heavy combine step becomes the bottleneck limiting parallel speedup. - For recurrence (T(n)=aT(n/b)+f(n)) with f(n)=n^d, if a < b^d then f dominates and T(n)=Θ(f(n)). True or False?
A) True (Master theorem case 3)  B) False
Answer: A
Explanation: If f(n) is polynomially larger than n^{\log_b a}, f dominates. - D&C for matrix inversion often uses block inversion with recursion and runs asymptotically like matrix multiplication. True or False?
A) True  B) False
Answer: A
Explanation: Block inversion reduces to matrix multiplications, achieving the same exponent asymptotically. - Divide & conquer algorithm to compute the skyline of buildings sorts by x then merges skylines with a linear combine ⇒ overall O(n log n). True or False?
A) True  B) False
Answer: A
Explanation: Merging skylines is linear; the sort dominates. - For T(n)=T(n/2)+T(n/3)+T(n/6)+n^{0.5}, since the size fractions sum to 1 and the toll is sublinear, T(n)=Θ(n). True or False?
A) True  B) False
Answer: A
Explanation: By Akra–Bazzi p=1, and the sublinear n^{0.5} toll contributes only O(n) in total. - Divide & conquer algorithms sometimes use a fallback to naive methods for small n (base-case threshold) for constant-factor improvement. True or False?
A) True  B) False
Answer: A
Explanation: Practical implementations switch to insertion sort or naive multiplication for small sizes. - For recurrence (T(n)=T(n-1)+T(n-1)+O(1)=2T(n-1)+O(1)) we get Θ(2^n). True or False?
A) True  B) False
Answer: A
Explanation: Each level doubles the number of subproblems, causing exponential growth. - D&C for calculating convex polygon diameter (rotating calipers) after the hull is computed is linear; the overall cost including sorting is still O(n log n). True or False?
A) True  B) False
Answer: A
Explanation: Hull O(n log n), diameter linear in hull size. - Recurrence with variable split sizes like T(n)=T(n-n^{1/2})+Θ(1) has depth ~n^{1/2} ⇒ T(n)=Θ(n^{1/2}). True or False?
A) True  B) False
Answer: A
Explanation: Each step reduces n by roughly √n, giving Θ(√n) levels. - In D&C, tail recursion optimization reduces space but not the number of operations. True or False?
A) True  B) False
Answer: A
Explanation: It eliminates stack overhead but the total work is the same. - For D&C on strings, suffix array construction can be done in Θ(n) with induced sorting or Θ(n log n) with D&C suffix sorting (the DC3 algorithm achieves linear). True or False?
A) True  B) False
Answer: A
Explanation: The DC3 (skew) algorithm is linear-time; simpler D&C variants are n log n. - For D&C exponentiation (binary exponentiation), recurrence T(n)=T(n/2)+O(1) gives Θ(log n) time. True or False?
A) True  B) False
Answer: A
Explanation: Repeated squaring halves the exponent each step. - D&C algorithm that does constant work per level and has Θ(log n) levels yields total Θ(log n). True or False?
A) True  B) False
Answer: A
Explanation: The total is the constant per-level work times the number of levels. - In D&C, if subproblems overlap substantially, memoization (DP) may be preferable to pure recursion to avoid repeated work. True or False?
A) True  B) False
Answer: A
Explanation: Overlap suggests dynamic programming. - For recurrence (T(n)=2T(n/2)+\frac{n}{\log n}), earlier we got Θ(n log log n). So it grows faster than linear but slower than n log n. True or False?
A) True  B) False
Answer: A
Explanation: The subtle polylog toll yields intermediate growth. - D&C on matrices for determinant via LU decomposition is typically O(n^3), dominated by matrix multiplication; asymptotically faster algorithms exist. True or False?
A) True  B) False
Answer: A
Explanation: Standard Gaussian elimination is cubic; fast matrix multiplication methods reduce the exponent. - For divide & conquer that always halves the problem but has combine cost Θ(n^2), recurrence (T(n)=2T(n/2)+n^2) gives Θ(n^2). True or False?
A) True  B) False
Answer: A
Explanation: f(n)=n^2 dominates n^{\log_b a}=n, so T=Θ(n^2). - Final conceptual: The standard template for proving correctness of a divide & conquer algorithm is: show correctness on the base case, assume correctness for subproblems (induction), and prove the combine yields a correct global solution. True or False?
A) True  B) False
Answer: A
Explanation: Structural induction over the combine step yields the correctness proof.
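The proof template in the final question maps directly onto code; a minimal merge sort sketch annotated with the three proof obligations:

```python
def merge_sort(a):
    # Base case: a list of length <= 1 is already sorted (correctness is immediate).
    if len(a) <= 1:
        return a
    # Inductive hypothesis: the recursive calls correctly sort the two halves.
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Combine: merging two sorted lists yields a sorted permutation of the input,
    # which completes the structural induction.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

assert merge_sort([5, 1, 4, 1, 3]) == [1, 1, 3, 4, 5]
```

Using `<=` in the merge keeps equal elements in their original order, which is exactly the stability property quizzed above.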
