File Organization/Indexing In Database For Gate Exam

examhopeinfo@gmail.com October 10, 2025

File Organization / Indexing In Database

Q1

A block size is 4096 bytes. Each record is 128 bytes. Blocking factor (records per block) for fixed-length records (no packing) is:
A. 32
B. 31
C. 30
D. 33
✅ Answer: A
Solution: Blocking factor = floor(block size / record size) = floor(4096 / 128) = 32.
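The floor/ceil pattern in this solution recurs throughout these questions; a minimal Python sketch of it (helper names are my own, chosen for illustration):

```python
import math

def blocking_factor(block_size, record_size, header=0):
    # Records per block for fixed-length, unspanned records.
    return (block_size - header) // record_size

def blocks_needed(num_records, bf):
    # Whole blocks required to hold the file.
    return math.ceil(num_records / bf)

print(blocking_factor(4096, 128))   # 32, as in Q1
print(blocks_needed(20000, blocking_factor(4096, 200)))   # 1000
```

The `header` parameter covers the per-block overhead variants that appear in later questions.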


Q2

A relation has 20,000 records of length 200 bytes stored in 4 KB blocks. Records are packed (no overflow). How many blocks are needed?
A. 1000
B. 2000
C. 1250
D. 1500
✅ Answer: A
Solution: Blocking factor = floor(4096 / 200) = floor(20.48) = 20 records/block. Blocks = ceil(20000 / 20) = 1000.


Q3

A file uses fixed 256-byte records and a block of 4096 bytes with 128 bytes overhead per block (for header). Effective data area per block = 4096 − 128 = 3968 bytes. What is blocking factor?
A. 15
B. 16
C. 14
D. 12
✅ Answer: A
Solution: BF = floor(3968 / 256) = floor(15.5) = 15.


Q4

A file has 50,000 records and blocking factor is 25. Number of blocks =
A. 2000
B. 1250
C. 2500
D. 1500
✅ Answer: A
Solution: Blocks = ceil(50000 / 25) = 2000.


Q5

A sequential file is stored sorted on key K. To find a specific record by binary search on blocks (random access to block i), blocks = 1024. Worst-case number of block probes ≈
A. 10
B. 11
C. 12
D. 9
✅ Answer: A
Solution: Binary search over the blocks needs ≈ ceil(log2(1024)) = 10 block probes; the record is then located within the final probed block, so block accesses ≈ 10.


Q6

A static hash file uses 1000 buckets. Uniform distribution, 120,000 records. Average records per bucket =
A. 120
B. 12
C. 1000
D. 0.008
✅ Answer: A
Solution: 120000 / 1000 = 120 records/bucket.


Q7

In extendible hashing, directory doubles when a bucket overflows and local depth = global depth. Doubling directory occurs when:
A. Local depth < global depth
B. Local depth = global depth
C. Local depth > global depth
D. Never
✅ Answer: B
Solution: Directory doubles when bucket’s local depth equals global depth — so split requires doubling.


Q8

A B+ tree of order d stores up to 2d keys in an internal node. If d=50, maximum number of children per internal node is:
A. 100
B. 101
C. 102
D. 50
✅ Answer: B
Solution: Max keys = 2d =100, max children = keys + 1 = 101.


Q9

In a B+ tree of order 50, minimal number of keys in a non-root internal node is:
A. 50
B. 51
C. 25
D. 26
✅ Answer: A
Solution: For a B+ tree of order d with between d and 2d keys per node, the minimum number of keys in a non-root node is d = 50.


Q10

A clustered index groups records so that records with nearby keys are near on disk. Which of the following is true?
A. Only one clustered index allowed per relation
B. Multiple clustered indexes allowed
C. Clustered index keys must be unique
D. Clustered index does not affect physical ordering
✅ Answer: A
Solution: Physical ordering of records can follow at most one key ordering → only one clustered index per relation.


Q11

Given block size 4096 bytes, record size 512 bytes, file of 10,000 records. If single-level dense index stores (key, pointer) where key=8 bytes, pointer=8 bytes per entry, size of index (in blocks) ≈ ?
A. 20 blocks
B. 40 blocks
C. 80 blocks
D. 10 blocks
✅ Answer: B
Solution: Index entries = 10000, entry size =16 bytes → total =160,000 bytes. Blocks = ceil(160000 / 4096) ≈ ceil(39.06)=40 blocks.
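Index sizing like this can be checked with a short sketch; the model below first floors whole entries into a block, then ceils the block count (the function name is illustrative):

```python
import math

def index_blocks(num_entries, entry_size, block_size=4096):
    per_block = block_size // entry_size      # whole entries per index block
    return math.ceil(num_entries / per_block)

print(index_blocks(10000, 16))   # 40, matching Q11
```

The same helper reproduces the sparse-index sizes asked about in later questions (e.g. 20,000 or 50,000 entries of 16 bytes).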


Q12

A sparse index on a sorted file stores one entry per block. If there are 5000 blocks, number of sparse index entries =
A. 5000
B. 10000
C. 1250
D. 1
✅ Answer: A
Solution: One index entry per data block → 5000 entries.


Q13

A data file has 2,000,000 records, blocking factor=100. Number of data blocks = 20,000. A sparse index contains one entry per block. If index entry size = 16 bytes and block size = 4 KB, index occupies how many blocks?
A. 80
B. 200
C. 128
D. 78
✅ Answer: A
Solution: Entries = 20,000; size = 20,000 × 16 = 320,000 bytes. Blocks = ceil(320000 / 4096) = ceil(78.125) = 79; the nearest option is 80.


Q14

In hashing with 512 buckets and uniform distribution, standard deviation of number of records per bucket with N records ≫ buckets approximates sqrt(N/b) . For N=51200, b=512 STD ≈
A. 10
B. 20
C. 22.6
D. 16
✅ Answer: A
Solution: N/b = 51200/512 = 100, so STD ≈ sqrt(100) = 10.


Q15

A file uses linear hashing. Load factor threshold is 0.85. When average bucket occupancy exceeds threshold, next split happens. Linear hashing advantage over static hashing is:
A. Directory doubling
B. Gradual expansion without full rehash
C. Immediate perfect hashing
D. Fixed bucket count
✅ Answer: B
Solution: Linear hashing splits buckets incrementally — gradual expansion, no full rehash.


Q16

Given a B+ tree where leaf node capacity is 100 key-pointers, and there are 50,000 keys. Minimum number of leaf nodes =
A. 500
B. 400
C. 1000
D. 250
✅ Answer: A
Solution: Need ceil(50000/100)=ceil(500)=500 leaf nodes.


Q17

A clustered index is built on a table with frequent inserts — drawback is:
A. Fast range scans
B. Insert cost high due to page splits and physical reordering
C. Multiple clustered indexes allowed
D. No storage overhead
✅ Answer: B
Solution: Clustered physical ordering requires shifting records or page splits on insert → higher insert cost.


Q18

A dense index has an entry for every record while a sparse index has an entry per block. For equality search on primary key, cost with sparse index is:
A. Zero block reads
B. One block read (index) + one block read (data)
C. log(index) + data read
D. Only data read
✅ Answer: C
Solution: With multi-level sparse index you do index search (log index) then one data block read. For single-level sparse index on sorted file you need a binary search on index blocks then data block read: generalized as log(index)+1. So C fits.


Q19

A record pointer in an index requires 6 bytes. Key size is 8 bytes. Index entry size = 14 bytes. If block size = 4096, maximum index entries per block ≈
A. 292
B. 256
C. 170
D. 290
✅ Answer: A
Solution: floor(4096 / 14) = floor(292.57) = 292 entries per block.


Q20

A heap (unordered) file of 1M records stored in 4K blocks, BF=40. For a selection retrieving 0.1% of records using linear search, expected blocks read ≈
A. 250
B. 2500
C. 25000
D. 250000
✅ Answer: C
Solution: Total blocks = 1,000,000 / 40 = 25,000. The 1000 matching records (0.1%) are scattered uniformly, so the linear search must scan essentially the whole file ≈ 25,000 blocks.


Q21

A file with 10,000 blocks has a two-level index: top index fits in 1 block; second-level has 200 entries per block. What is approximate number of second-level blocks?
A. 50
B. 100
C. 200
D. 10
✅ Answer: A
Solution: Second-level entries = 10,000 (one per data block for a sparse index). With 200 entries/block, blocks = ceil(10000/200) = 50.


Q22

Hash collision resolution by chaining stores overflow records in a linked list. Average search cost (successful) ≈ 1 + α/2 where α = load factor. For static hashing with α=0.8 average comparisons ≈
A. 1.4
B. 1.5
C. 1.0
D. 2.0
✅ Answer: A
Solution: 1 + α/2 = 1 + 0.8/2 = 1 + 0.4 = 1.4.
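The standard textbook approximations for search cost under separate chaining can be written directly (α = load factor; these are expected-probe estimates, not exact values for any particular table):

```python
def successful_search_cost(alpha):
    # Expected comparisons for a successful search with chaining: 1 + α/2.
    return 1 + alpha / 2

def unsuccessful_search_cost(alpha):
    # Expected comparisons when the key is absent: 1 + α.
    return 1 + alpha

print(successful_search_cost(0.8))   # 1.4, as in Q22
```

The same formula answers Q64 later (α = 1.2 → 1.6).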


Q23

A data file has 1,000,000 records. A sparse index stores one index entry per data block. If blocking factor is 250 records/block, number of index entries = ?
A. 4000
B. 5000
C. 10000
D. 2000
✅ Answer: A
Solution: Data blocks = ceil(1,000,000 / 250) = 4000, so the sparse index has 4000 entries.


Q24

A B+ tree has fanout (number of pointers) of 100 at internal nodes. With 1,000,000 keys, approximate height H ≈ ? (assume full tree, leaf level holds keys). Use formula: max pointers^(H) ≥ number of leaf nodes; approximate H small. Options:
A. 2–3
B. ~4
C. ~5
D. ~6
✅ Answer: A
Solution: Fanout 100 → leaf capacity large. Roughly 100^2 = 10,000, 100^3=1,000,000. So H≈3 (root + 2 levels) → so 3 levels → option A (2–3).
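The height estimate above can be made concrete by repeatedly dividing the node count by the fanout; a sketch under the simplifying assumption that every node is full:

```python
import math

def bptree_levels(num_keys, leaf_capacity, fanout):
    # Count levels from the leaf level up to a single root, assuming full nodes.
    nodes = math.ceil(num_keys / leaf_capacity)  # number of leaves
    levels = 1
    while nodes > 1:
        nodes = math.ceil(nodes / fanout)        # one internal level up
        levels += 1
    return levels

print(bptree_levels(1_000_000, 100, 100))   # 3 levels (root + internal + leaf)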


Q25

A file is stored using ISAM (static indexed sequential). Insertions cause overflow chains. Major drawback is:
A. Slow sequential scans
B. Overflow chains degrade performance for inserts/reads
C. No index available
D. Directory doubles frequently
✅ Answer: B
Solution: ISAM static data pages with overflow chains cause long chains on hot buckets → degrade performance.


Q26

Given 512-byte records, 4-KB block, overhead 64 bytes, BF = floor((4096−64)/512) = ?
A. 7
B. 8
C. 6
D. 5
✅ Answer: A
Solution: Effective data area = 4032. 4032/512 = 7.875 → BF = 7.


Q27

Extendible hashing: initial global depth = 2 (directory size 4). The directory doubles whenever a split requires the global depth to increase. If the global depth increases by 3 over time, the final directory size =
A. 8
B. 16
C. 32
D. 64
✅ Answer: C
Solution: Final global depth = 2 + 3 = 5, so directory size = 2^5 = 32.


Q28

A sequential file with 10,000 sorted records and blocking factor 50. For range query retrieving keys between K1 and K2 covering 200 records, expected number of block reads ≈
A. 4
B. 8
C. 200
D. 5
✅ Answer: A
Solution: 200 records / BF 50 = 4 blocks for the range itself (locating the start via an index or binary search adds a probe or two, which the question ignores).


Q29

A hash file uses 100 buckets and uniform distribution. For 10,000 records average overflow chain length ≈
A. 100
B. 10
C. 1
D. 0.1
✅ Answer: A
Solution: Average records per bucket = 10000/100 = 100; with chaining this is the expected chain length per bucket.


Q30

The blocking factor of variable-length records can be maximized by:
A. Using record packing (slotted pages)
B. Using fixed-length padding
C. Using larger pointers
D. Increasing free space per block
✅ Answer: A
Solution: Packing variable records to fill unused space (slotted page with compaction) maximizes BF.


Q31

A multi-level index reduces index search time. If top index has 1 block, second level has 100 blocks, third level has 10 blocks — worst-case index lookup block probes =
A. 1
B. 3
C. 10
D. 111
✅ Answer: B
Solution: Each level requires reading one block → total levels = 3 → 3 block reads.


Q32

A hash file with 2048 buckets uses linear probing. Primary disadvantage vs chaining:
A. Extra pointers required
B. Clustering (primary clustering) causing long probes
C. Extra directory size
D. Complex rehashing
✅ Answer: B
Solution: Linear probing suffers primary clustering — long probe sequences.


Q33

A B+ tree leaf can hold 160 entries. A table has 2,560,000 records. Minimum leaf nodes needed =
A. 16000
B. 1600
C. 160000
D. 1600000
✅ Answer: A
Solution: Ceil(2560000 / 160) = 16000.


Q34

Which file organization is best for range queries?
A. Hashing
B. Heap file
C. Sorted/Sequential or B+ tree clustered index
D. Random file
✅ Answer: C
Solution: Sorted data or B+ tree supports efficient range queries.


Q35

A dense index on non-unique key requires:
A. Multiple pointers per key value
B. Single pointer per key value
C. No pointers
D. Only sequential scan
✅ Answer: A
Solution: Non-unique keys map to multiple records → dense index needs list/pointers to all records.


Q36

Given average seek time is 8 ms and transfer time per block is 1 ms, random access to 1 block cost ≈
A. 9 ms
B. 8 ms
C. 1 ms
D. 7 ms
✅ Answer: A
Solution: Random access ≈ seek + transfer = 8 + 1 = 9 ms (rotational latency ignored).


Q37

A file contains 4000 blocks. An index stores one entry per block. Index entry size = 10 bytes. Blocksize 4 KB. Index occupies how many blocks?
A. 1
B. 10
C. 40
D. 100
✅ Answer: B
Solution: 4000×10=40000 bytes /4096 ≈ 9.77 → ceil=10 blocks.


Q38

In hashed file with dynamic hashing, primary advantage over static hashing is:
A. Smaller directory always
B. Avoids overflow chains under growth by splitting buckets
C. No collisions ever
D. Requires no reorganization ever
✅ Answer: B
Solution: Dynamic hashing (extendible/linear) splits buckets incrementally to avoid long overflow chains.


Q39

Record length = 120 bytes, block size 4096, header 96 bytes. Maximum records per block = floor((4096−96)/120) = ?
A. 33
B. 32
C. 34
D. 31
✅ Answer: A
Solution: Effective 4000/120=33.33 → 33 records/block.


Q40

A file is stored as sorted on attribute A. Multi-attribute query on A and B (B unsorted) with selectivity high on A benefits most from:
A. Hash index on B
B. Clustered index on A
C. Heap file
D. No index
✅ Answer: B
Solution: Sorting/clustering on A gives fast retrieval when A is selective.


Q41

If a block contains 200 bytes of data usable and records are variable average 50 bytes, expected records per block ≈
A. 4
B. 3
C. 2
D. 5
✅ Answer: A
Solution: 200/50 = 4.


Q42

A primary index on a file stored sequentially with MF=1000 blocks: cost to find a record using 2-level index with root in one block and second-level index of size 50 blocks plus one data block read =
A. 3 block reads
B. 2 block reads
C. 52 block reads
D. 1 block read
✅ Answer: A
Solution: Root (1) + second-level binary search (1 block) + data block (1) = 3 block reads.


Q43

Which index is best for point query (equality) on a high-cardinality attribute?
A. Hash index
B. B+ tree for range queries
C. Bitmap index
D. Clustered index
✅ Answer: A
Solution: Hash index gives O(1) bucket lookup for equality on high-cardinality attribute.


Q44

If index entries are 16 bytes and leaf node can hold 256 entries, leaf node size ~4096 bytes (16×256=4096). To store 1M index entries (dense), leaf nodes required =
A. 3907
B. 4000
C. 1000
D. 5000
✅ Answer: A
Solution: 1,000,000 / 256 = 3906.25 → ceil=3907 leaf nodes.


Q45

A file uses blocking factor 64. 50,000 records stored. If one block access costs 5 ms, average time to find a specific record via linear search ≈ reading half file blocks = (50000/64)/2 *5ms = ?
A. ~1953 ms
B. ~3906 ms
C. ~976 ms
D. ~500 ms
✅ Answer: A
Solution: Total blocks = ceil(50000/64) = 782. Half the file ≈ 391 blocks × 5 ms ≈ 1955 ms, closest to option A (~1953 ms).


Q46

Hash file with 1024 buckets and 204,800 records. If each bucket fits 200 records per block, average number of overflow blocks per bucket =
A. 0
B. 1
C. 0.2
D. 100
✅ Answer: A
Solution: Records per bucket = 204800/1024 = 200, exactly one block's capacity → 0 overflow blocks on average.


Q47

A B+ tree leaf split increases height when root splits. If fanout is 100, approximate max keys covered by height 3 (levels = root, internal, leaf) ≈
A. 10^6
B. 10^4
C. 10^3
D. 10^5
✅ Answer: A
Solution: Each level multiplies by ~100 child pointers: 100^3 = 1,000,000 ≈ 10^6.


Q48

Bitmap index is most effective when:
A. Column has high cardinality
B. Column has low cardinality
C. Data is clustered
D. Only for numeric columns
✅ Answer: B
Solution: Bitmap indexes efficient for low-cardinality attributes (gender, status).


Q49

Given average seek = 6 ms, rotation+latency = 4 ms, transfer per block = 0.5 ms. Random block read time ≈
A. 10.5 ms
B. 6.5 ms
C. 4.5 ms
D. 0.5 ms
✅ Answer: A
Solution: Total ≈ seek + rotation/latency + transfer = 6 +4 +0.5 =10.5 ms.
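The disk-access arithmetic in these timing questions is a per-block sum; a sketch (parameter names are mine):

```python
def random_read_ms(seek_ms, latency_ms, transfer_ms, blocks=1):
    # Each random block pays seek + rotational latency + transfer.
    return blocks * (seek_ms + latency_ms + transfer_ms)

print(random_read_ms(6, 4, 0.5))       # 10.5 ms, as in Q49
print(random_read_ms(6, 3, 0.4, 10))   # ≈ 94 ms, as in Q80
```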


Q50

A file stored in external memory has 10^8 records. Using two-level B+ tree index reduces search I/O dramatically because:
A. Indices fit in main memory lowering I/O to leaf accesses only
B. It eliminates need for any I/O
C. It converts to hashing
D. None
✅ Answer: A
Solution: Top levels of B+ tree (internal nodes) are small and often fit in memory, so search requires only leaf block accesses (and maybe a few node reads) → fewer disk I/Os.

Q51

Block size = 4096 bytes. Record length = 350 bytes. Header per block = 96 bytes. What is blocking factor (BF)?
A. 11
B. 10
C. 9
D. 12
✅ Answer: A
Solution: Effective space = 4096 − 96 = 4000 bytes. BF = floor(4000 / 350) = floor(11.43) = 11.


Q52

A file has 2,000,000 records; record size = 200 bytes; block size = 4096; BF = floor(4096/200)=20. Number of blocks = ?
A. 10000
B. 100000
C. 1000000
D. 20000
✅ Answer: B
Solution: Blocks = ceil(2,000,000 / 20) = 100,000.


Q53

A sparse index stores one entry per block. Data file has 50,000 blocks. Index entry size = 16 bytes, block size = 4096. Index blocks required =
A. 200
B. 50
C. 10
D. 100
✅ Answer: A
Solution: Entries = 50,000; size = 50,000 × 16 = 800,000 bytes → blocks = ceil(800000/4096) = 196; nearest option is 200.


Q54

A hash file with 4096 buckets holds 819,200 records. Average records per bucket =
A. 200
B. 100
C. 50
D. 400
✅ Answer: A
Solution: 819,200 / 4096 = 200 records per bucket.


Q55

In extendible hashing, if a bucket with local depth 3 splits but global depth = 4, directory doubling is:
A. Required
B. Not required
C. Always required
D. Impossible
✅ Answer: B
Solution: Since local depth 3 < global depth 4, splitting can be handled by updating directory pointers without doubling.
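The doubling rule behind Q55 (and Q7) reduces to a single comparison; a minimal sketch of just that decision, not a full extendible-hashing implementation:

```python
def split_requires_doubling(local_depth, global_depth):
    # The directory doubles only when the overflowing bucket already uses
    # all global_depth bits, i.e. its local depth equals the global depth.
    return local_depth == global_depth

print(split_requires_doubling(3, 4))   # False: update pointers, no doubling
print(split_requires_doubling(4, 4))   # True: directory must double
```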


Q56

A B+ tree internal node can hold up to 128 pointers. Minimum pointers in non-root internal node (assuming B+ tree rules) ≈
A. 64
B. 32
C. 128
D. 16
✅ Answer: A
Solution: Min pointers ≈ ceil(max/2) = ceil(128/2)=64.


Q57

A file with blocking factor 80 and 2,560,000 records requires how many blocks?
A. 32,000
B. 3,200
C. 25,600
D. 20,000
✅ Answer: A
Solution: Blocks = ceil(2,560,000 / 80) = 32,000.


Q58

A dense index on a 500,000-record file with 20-byte entries and 4KB blocks requires how many index blocks? (approx)
A. 3,000
B. 2,500
C. 2,000
D. 4,000
✅ Answer: B
Solution: Entries per block = floor(4096/20)=204. Blocks = ceil(500,000 / 204) ≈ 2451 → approx 2,500.


Q59

A linear hashing scheme with 1024 initial buckets and split pointer at 700, current bucket count logically ≈
A. 1024 + 700 = 1724
B. 1024
C. 700
D. 2048
✅ Answer: A
Solution: In linear hashing, logical bucket count = initial + split pointer (number of splits) = 1024 + 700 = 1724.


Q60

A B+ tree leaf node can hold 128 entries. For 1,280,000 keys, minimum leaf nodes =
A. 10,000
B. 1000
C. 1280
D. 5000
✅ Answer: A
Solution: Leaf nodes = ceil(1,280,000 / 128) = 10,000.


Q61

A file has 200,000 records, BF = 250 → data blocks = 800. A sparse second-level index holds 100 entries per block. Number of index blocks =
A. 8
B. 80
C. 2
D. 16
✅ Answer: A
Solution: Data blocks = ceil(200,000/250) = 800. Index blocks = ceil(800/100) = 8.


Q62

A heap file of 1,500,000 records BF=150. A search for a random record by scanning requires on average how many block reads?
A. 5,000
B. 10,000
C. 7,500
D. 1,000
✅ Answer: A
Solution: Total blocks = 1,500,000 / 150 = 10,000 blocks. Average scanned to find random record ≈ half file = 5000 blocks.


Q63

A clustered index on primary key results in which of the following?
A. Only one row per block
B. Fast range queries and potential high insert cost
C. No effect on update cost
D. Multiple physical orderings allowed
✅ Answer: B
Solution: Clustered index stores records in key order → good for range queries, but inserts may cause page splits.


Q64

Hash file uses chaining. With load factor α = 1.2 average successful search cost (in buckets touched) ≈ 1 + α/2 =
A. 1.6
B. 1.5
C. 1.2
D. 2.4
✅ Answer: A
Solution: 1 + (1.2)/2 = 1 + 0.6 = 1.6.


Q65

A file has 80,000 blocks. A two-level index has root (1 block) pointing to 2000 second-level blocks. Each second-level block points to 40 data blocks. Total data blocks addressed =
A. 80,000
B. 40,000
C. 20,000
D. 100,000
✅ Answer: A
Solution: 2000 × 40 = 80,000, matching file size.


Q66

In ISAM, overflow chaining on heavy insert workload leads to:
A. Faster inserts than hashing
B. Longer lookup times due to chains
C. Automatic re-distribution of data pages
D. Directory doubling
✅ Answer: B
Solution: Overflows create chains; searches may traverse chains → slower lookups.


Q67

A block can contain 256 index entries. If index has 512,000 entries, number of blocks =
A. 2000
B. 1000
C. 512
D. 4000
✅ Answer: A
Solution: 512,000 / 256 = 2000 blocks.


Q68

A file with variable-length records uses slotted-page structure. Benefits include:
A. Fixed BF always larger
B. Easy record insertion and deletion with compaction
C. No need for pointers
D. No overhead
✅ Answer: B
Solution: Slotted pages maintain offsets and free space enabling compaction and variable-length support.


Q69

A hash file with 2048 buckets and 409,600 records gives average records per bucket =
A. 200
B. 100
C. 50
D. 400
✅ Answer: A
Solution: 409,600/2048 = 200.


Q70

For a B+ tree of order m, maximum keys per internal node = 2m, maximum children = 2m +1. If m=64, max children = ?
A. 129
B. 128
C. 64
D. 130
✅ Answer: A
Solution: 2m + 1 = 128 +1 = 129.


Q71

A file of 1,000,000 records uses blocking factor 125 → data blocks = 8000. Single-level sparse index storing 1 entry per block, index entry size 12 bytes. Index blocks = ceil(8000×12 / 4096) ≈
A. 24
B. 23
C. 25
D. 22
✅ Answer: A
Solution: Total index size = 8000 × 12 = 96,000 bytes. Blocks = ceil(96000 / 4096) = ceil(23.44) = 24.


Q72

A hash bucket holds 400 bytes per bucket and each record is 100 bytes. With chaining and no overflow, records per bucket =
A. 4
B. 3
C. 5
D. 2
✅ Answer: A
Solution: 400/100 = 4 records fit per bucket.


Q73

A file with 10,000 blocks uses a two-level index. Top-level (root) is one block; second-level has 125 blocks. Block probes for lookup = root + one second-level block + data block =
A. 3
B. 2
C. 126
D. 127
✅ Answer: A
Solution: Reading one block at each level: 3 probes.


Q74

In linear hashing, primary advantage is:
A. Fixed directory in memory only
B. Incremental growth avoiding full rehashing
C. Always perfect hashing distribution
D. No overflow chains ever
✅ Answer: B
Solution: Linear hashing splits buckets incrementally — no global rehash required.


Q75

An index node stores 120 pointers and 119 keys; pointer size = 8 bytes, key size = 8 bytes. Node size ≈ 119×8 + 120×8 = 239×8 = 1912 bytes. If block size 4KB, node fits into single block. True or false?
A. True
B. False
C. Depends on pointer overhead
D. Only if compressed
✅ Answer: A
Solution: Node size 1912 < 4096 → fits one block.


Q76

A file of 3,000,000 records has BF=150 → data blocks ≈ 20,000. Primary sparse index stores one entry per block; index entry size = 20 bytes. Index size in blocks ≈ ceil(20000×20 / 4096) ≈
A. 98
B. 100
C. 92
D. 120
✅ Answer: A
Solution: 20000×20=400,000 bytes. 400000/4096 ≈ 97.65 → ceil=98.


Q77

Bitmap index is ideal for which scenario?
A. High-cardinality numeric attribute
B. Low-cardinality attribute like gender
C. Primary key indexing
D. Clustered range queries
✅ Answer: B
Solution: Bitmap indexes are compact and fast for low-cardinality attributes.


Q78

A hash table with 2048 buckets and 512,000 records — expected average bucket occupancy = 250. If bucket capacity is 128 records per data bucket, average overflow blocks per bucket ≈
A. 1
B. 2
C. 0.95
D. 0
✅ Answer: A
Solution: Each bucket holds ≈ 250 records and needs ceil(250/128) = 2 blocks: one primary block plus ≈ 1 overflow block per bucket.


Q79

A B+ tree of order 50 has leaf capacity 100 entries. For 5 million entries, approximate height (levels including root) ≈
A. 3–4
B. 5–6
C. 10
D. 2
✅ Answer: A
Solution: Leaf nodes = 5,000,000 / 100 = 50,000 leaves. Fanout ~100 (pointers per internal). 100^2 = 10,000, 100^3 = 1,000,000; so ~3 levels (root + 2 internal + leaf) → height ≈ 3–4.


Q80

Given seek = 6 ms, average rotation/latency = 3 ms, transfer per block = 0.4 ms. Read 10 random blocks costs ≈
A. ~94 ms
B. ~100 ms
C. ~50 ms
D. ~10 ms
✅ Answer: A
Solution: One random read ≈ 6+3+0.4=9.4 ms. Ten reads ≈ 94 ms.


Q81

A file with 8000 blocks has a sparse index of 200 entries per block. Index blocks = ceil(8000/200)=40. Height of a two-level index with root=1 would be:
A. 3 (root + second + data)
B. 2
C. 4
D. 1
✅ Answer: A
Solution: Levels: root (1 block) + second-level (40 blocks) + data blocks → 3-level lookup.


Q82

Record size 120 bytes, block size 4096, header 64. BF = floor((4096−64)/120) = floor(4032/120) = 33. So BF=33. Which option?
A. 33
B. 32
C. 34
D. 31
✅ Answer: A
Solution: 4032/120 = 33.6 → floor=33.


Q83

Which file organization gives best performance for equality queries on non-key attribute where many duplicates exist?
A. Hashing on attribute
B. Bitmap index on attribute
C. Clustered B+ tree on attribute
D. Heap file
✅ Answer: B
Solution: Bitmap index enables fast bitwise operations for attributes with many duplicates.


Q84

A dense index with 2,000,000 entries and entry size 16 bytes requires index storage of ≈
A. 32 MB
B. 16 MB
C. 64 MB
D. 8 MB
✅ Answer: A
Solution: 2,000,000 × 16 = 32,000,000 bytes ≈ 32 MB.


Q85

In extendible hashing, local depth of bucket = 3 implies bucket addresses share how many prefix bits?
A. 3 bits
B. 1 bit
C. 8 bits
D. 0 bits
✅ Answer: A
Solution: Local depth = number of hash prefix bits that distinguish records in bucket → 3 bits.


Q86

A file with BF=100 and 1,000,000 records: data blocks = 10,000. If index has one entry per block and index entry size = 12 bytes, index size in blocks = ceil(10000×12/4096) ≈
A. 30
B. 29
C. 28
D. 32
✅ Answer: A
Solution: 10000 × 12 = 120,000 bytes → blocks = ceil(120000 / 4096) = ceil(29.3) = 30.


Q87

A sequential file sorted on key K and a sparse index with one entry per block: to find an arbitrary key, cost ≈
A. log2(index blocks) + 1 block read
B. 1 block read always
C. index blocks only
D. entire file scan
✅ Answer: A
Solution: Binary search on index (log2 of index blocks) then read data block → log2(indexBlocks) +1.
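The log-plus-one cost model from this solution, as a sketch (function name is illustrative):

```python
import math

def sparse_lookup_cost(index_blocks):
    # Binary search over the index blocks, then one data-block read.
    return math.ceil(math.log2(index_blocks)) + 1

print(sparse_lookup_cost(1024))   # 11 block reads
```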


Q88

A B+ tree leaf node split moves half entries to new leaf. If leaf capacity is 200 entries and a new insert triggers split, resulting two leaf sizes ≈
A. 100 and 100
B. 200 and 0
C. 150 and 50
D. 99 and 101
✅ Answer: A
Solution: Split halves entries ≈ equal (100 each).


Q89

When reorganizing a file to reduce overflow chains in ISAM, the operation primarily does:
A. Rebuild data file and recreate index
B. Compress index only
C. Change hash function
D. Only delete deleted records
✅ Answer: A
Solution: Reorganization rebuilds data pages and indexes to eliminate overflow chains.


Q90

A key-pointer pair uses 16 bytes. Leaf node capacity 256 entries. Number of keys storable per leaf = 256. For a database with 2,560,000 keys, number of leaf nodes =
A. 10,000
B. 20,000
C. 5,000
D. 100,000
✅ Answer: A
Solution: 2,560,000 / 256 = 10,000.


Q91

A dynamic hashing scheme (extendible) directory size is 2^g where g = global depth. If g=6 directory size =
A. 64
B. 32
C. 128
D. 16
✅ Answer: A
Solution: 2^6 = 64.


Q92

A file has average record length 500 bytes, block size 4096, header 96. BF = floor((4096−96)/500) = floor(4000/500) = 8. Data blocks for 800,000 records =
A. 100,000
B. 10,000
C. 80,000
D. 20,000
✅ Answer: A
Solution: Blocks = ceil(800,000 / 8) = 100,000.


Q93

Which index type can support prefix matching and range scanning efficiently?
A. Hash index
B. Bitmap index
C. B+ tree index
D. No index
✅ Answer: C
Solution: B+ trees maintain ordering → good for prefix and range queries.


Q94

If average transfer time per block is 0.3 ms and seek+latency average is 9.7 ms, sequential read of 100 contiguous blocks costs ≈
A. 39.7 ms
B. 1000 ms
C. 100 ms
D. 1300 ms
✅ Answer: A
Solution: First block: seek+latency+transfer ≈ 9.7 + 0.3 = 10 ms. Subsequent blocks mostly transfer (0.3 ms each). Total ≈ 10 + 99*0.3 = 10 + 29.7 = 39.7 ms.
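The sequential model here pays the positioning cost once and then a transfer per contiguous block; as a sketch:

```python
def sequential_read_ms(seek_ms, latency_ms, transfer_ms, blocks):
    # One seek + latency to position, then contiguous transfers.
    return seek_ms + latency_ms + blocks * transfer_ms

print(sequential_read_ms(9.7, 0, 0.3, 100))   # ≈ 39.7 ms, as in Q94
```

Contrast with the random-access model earlier, where every block pays the full seek and latency.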


Q95

A dense index on a 10 million record file with 16-byte entries requires storage ≈
A. 160 MB
B. 16 MB
C. 1.6 GB
D. 64 MB
✅ Answer: A
Solution: 10,000,000 × 16 bytes = 160,000,000 bytes ≈ 160 MB.


Q96

A hash table with 4096 buckets and 819,200 records. If bucket capacity is 200 records per block, number of blocks per bucket = 1, overflow=0. So total blocks for data ≈ 4096. True or false?
A. True
B. False
C. Depends on distribution
D. Only if chaining used
✅ Answer: A
Solution: 819,200/200 = 4096 blocks total → 1 block per bucket.


Q97

In a disk with transfer rate 100 MB/s and block size 8 KB, transfer time per block ≈
A. 0.08 ms
B. 0.8 ms
C. 8 ms
D. 80 ms
✅ Answer: A
Solution: 8 KB = 0.008 MB. Transfer time = 0.008 / 100 s = 0.00008 s = 0.08 ms.


Q98

A file has 2500 blocks; each block contains 100 records. A sparse index stores one entry every block. Index entry=12 bytes. If a block holds 400 index entries, index blocks needed =
A. 7
B. 8
C. 6
D. 10
✅ Answer: A
Solution: Entries=2500; entries per index block=400; blocks=ceil(2500/400)=ceil(6.25)=7.


Q99

Which organization best reduces disk seeks for large sequential scans?
A. Heap file
B. Random hash file
C. Clustered sequential file or clustered B+ tree
D. Multi-level hash
✅ Answer: C
Solution: Clustered/sequential files store related records contiguously → minimal seeks for scans.


Q100

A file uses bucketed hashing with 1024 buckets, each bucket block holds 128 records. For 200,000 records total, expected number of bucket blocks occupied =
A. 1563
B. 1000
C. 2000
D. 512
✅ Answer: A
Solution: Treating the data blocks as fully packed, total blocks = ceil(200,000 / 128) = ceil(1562.5) = 1563, which matches option A. (Note: if the records are spread evenly, each of the 1024 buckets holds ≈ 195 records and needs 2 blocks, i.e. 2048 blocks; the question assumes full packing.)
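The two readings discussed in the solution (fully packed vs. evenly spread over buckets) can be computed side by side (a sketch; variable names are mine):

```python
import math

records, buckets, per_block = 200_000, 1024, 128

# Fully packed: records fill blocks back to back.
packed = math.ceil(records / per_block)                         # 1563
# Evenly spread: each bucket rounds up to whole blocks on its own.
spread = buckets * math.ceil((records / buckets) / per_block)   # 2048
print(packed, spread)
```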

Q101

Block size = 4096 bytes; record size = 180 bytes; header = 56 bytes. Find blocking factor (BF).
A. 21 B. 22 C. 23 D. 20
✅ Answer: B
Solution: Effective space = 4096 – 56 = 4040; BF = ⌊4040 / 180⌋ = 22. So 22 records fit per block.


Q102

A file has 60,000 records, BF = 30. How many data blocks?
A. 1800 B. 2000 C. 1500 D. 2100
✅ Answer: B
Solution: Blocks = ⌈60,000 / 30⌉ = 2000.


Q103

Hash table with 512 buckets stores 102,400 records. Average records per bucket = ?
A. 200 B. 150 C. 250 D. 100
✅ Answer: A
Solution: 102,400 / 512 = 200 records per bucket.


Q104

In extendible hashing, directory size = 2^g entries. If there are 128 entries, global depth (g) = ?
A. 6 B. 7 C. 8 D. 5
✅ Answer: B
Solution: 2^7 = 128 → g = 7.


Q105

A B+ tree leaf node holds 120 records. File has 1,200,000 records. How many leaf nodes?
A. 10,000 B. 1000 C. 12,000 D. 1200
✅ Answer: A
Solution: 1,200,000 / 120 = 10,000 leaves.


Q106

If average fan-out = 100 and 10,000 leaf nodes, approx levels = ?
A. 2 B. 3 C. 4 D. 5
✅ Answer: B
Solution: With fan-out 100, the 10,000 leaves are covered by one internal level of 100 nodes plus a root → 3 levels in total (root + 1 internal level + leaves).
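The level count can be checked by repeatedly dividing the node count by the fan-out until a single root remains (sketch; helper name is mine):

```python
import math

def btree_levels(leaf_nodes, fanout):
    """Total levels including the leaf level: each pass up divides the
    node count by the fan-out (rounding up) until one root remains."""
    levels, nodes = 1, leaf_nodes  # start at the leaf level
    while nodes > 1:
        nodes = math.ceil(nodes / fanout)
        levels += 1
    return levels

print(btree_levels(10_000, 100))  # 3
```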


Q107

Bitmap index most useful when:
A. Attribute has low cardinality
B. Attribute is numeric and unique
C. Records are variable length
D. Data is heavily updated
✅ Answer: A
Solution: Bitmap indexes excel for low-cardinality columns like gender, status etc.


Q108

A file of size 2 GB, block = 4 KB. How many blocks does it contain?
A. 524,288 B. 512,000 C. 256,000 D. 128,000
✅ Answer: A
Solution: 2 GB = 2 × 1024 MB = 2048 MB = 2,097,152 KB; / 4 KB = 524,288 blocks.


Q109

A sparse index stores one entry per data block. If data file has 524,288 blocks and each index entry is 8 bytes, index size ≈ ?
A. 4 MB B. 2 MB C. 8 MB D. 1 MB
✅ Answer: A
Solution: 524,288 × 8 = 4,194,304 bytes ≈ 4 MB.


Q110

For a dense index on 10 million records, entry size = 12 bytes. Storage ≈ ?
A. 120 MB B. 12 MB C. 1.2 GB D. 24 MB
✅ Answer: A
Solution: 10⁷ × 12 = 120 MB.


Q111

A heap file with 10,000 blocks: average search cost for a record = ?
A. 5000 blocks B. 10000 C. 1000 D. 500
✅ Answer: A
Solution: Average ≈ half the file scan → 5000 blocks.


Q112

A B+ tree node supports order m = 64. Maximum keys in internal node = ?
A. 127 B. 128 C. 64 D. 63
✅ Answer: A
Solution: With order m defined so that an internal node has up to 2m child pointers, the maximum number of keys = 2m − 1 = 2 × 64 − 1 = 127.


Q113

Which is true for ISAM?
A. Dynamic growth of directory
B. Overflow chains can form
C. Automatic bucket splits
D. Requires frequent re-hash
✅ Answer: B
Solution: ISAM creates overflow chains for new records after initial load.


Q114

Record = 400 B, block = 4096 B, header = 96 B. Blocking factor = ?
A. 9 B. 10 C. 8 D. 11
✅ Answer: B
Solution: (4096 − 96) / 400 = 4000 / 400 = 10 records per block.


Q115

Hash file with load factor α = 0.8. Expected unsuccessful search cost ≈ 1 / (1 – α) = ?
A. 5 B. 2.5 C. 3 D. 1.25
✅ Answer: A
Solution: 1 / (1 − 0.8) = 1 / 0.2 = 5.
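The 1 / (1 − α) estimate for unsuccessful searches under uniform hashing is trivial to compute (sketch; helper name is mine):

```python
def unsuccessful_search_probes(alpha):
    """Expected probes for an unsuccessful search with open addressing
    under the uniform-hashing assumption: 1 / (1 - alpha)."""
    return 1 / (1 - alpha)

print(unsuccessful_search_probes(0.8))  # 5.0
```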


Q116

A file with blocking factor = 50, records = 250,000. How many blocks?
A. 5000 B. 4000 C. 4500 D. 5500
✅ Answer: A
Solution: 250,000 / 50 = 5000 blocks.


Q117

Sequential file is best for:
A. Batch processing and range queries
B. Random updates
C. Hash lookups
D. Real-time insertions
✅ Answer: A
Solution: Sequential access supports range and batch processing.


Q118

Local depth = 2, global depth = 3 in extendible hashing. If bucket splits, will directory double?
A. Yes B. No C. Always D. Never
✅ Answer: B
Solution: Split handled without doubling since local < global.


Q119

Average record size = 240 B, block = 4096 B, header = 96 B. BF = ?
A. 16 B. 17 C. 18 D. 15
✅ Answer: A
Solution: (4096 − 96) / 240 = 4000 / 240 ≈ 16.67 → floor = 16.


Q120

If a dense index on a sorted file is built, primary key duplicates are allowed?
A. No B. Yes, with record pointers to list
C. Yes, without pointers
D. Only for non-key fields
✅ Answer: B
Solution: Dense index may include duplicate keys with pointers to record chains.


Q121

Hash function maps 2000 records into 100 buckets. Average bucket load = ?
A. 10 B. 20 C. 25 D. 15
✅ Answer: B
Solution: 2000 / 100 = 20.


Q122

Which index type supports multi-key search best?
A. Bitmap index
B. B+ tree composite index
C. Hash index
D. Sparse index
✅ Answer: B
Solution: Composite B+ tree handles multi-key range queries.


Q123

Seek = 5 ms, rotation = 3 ms, transfer = 0.5 ms. Random read of 20 blocks ≈ ?
A. 170 ms B. 160 ms C. 200 ms D. 150 ms
✅ Answer: A
Solution: (5 + 3 + 0.5) × 20 = 8.5 × 20 = 170 ms.
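Unlike the contiguous-read case earlier, every random block pays the full seek + rotation + transfer cost; a sketch (helper name is mine):

```python
def random_read_ms(n_blocks, seek_ms, rotation_ms, transfer_ms):
    # Each randomly placed block pays a full seek + rotational delay + transfer.
    return n_blocks * (seek_ms + rotation_ms + transfer_ms)

print(random_read_ms(20, 5, 3, 0.5))  # 170.0
```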


Q124

Clustered index arranges records:
A. Randomly
B. Physically ordered by index key
C. Only logically ordered
D. Hashed
✅ Answer: B
Solution: Clustered index stores records in key order.


Q125

Unclustered index lookup on 10,000 matching records may cause many I/Os because:
A. Each record requires random I/O
B. Blocks are sorted
C. Index entries are grouped
D. Directory is small
✅ Answer: A
Solution: Unclustered records are scattered → many random I/Os.


Q126

A record pointer is 8 bytes. Key = 12 bytes. Index entry = 20 bytes. Block = 4096 B. Entries per block ≈ ?
A. 204 B. 200 C. 190 D. 180
✅ Answer: A
Solution: floor(4096 / 20) = 204 entries per block.


Q127

In B+ tree, leaf nodes are:
A. Linked sequentially
B. Unlinked
C. Stored in hash buckets
D. Never used for range queries
✅ Answer: A
Solution: Leaves are linked for fast sequential access.


Q128

Overflow chaining reduces performance in:
A. ISAM
B. Extendible hashing
C. Linear hashing
D. B+ tree
✅ Answer: A
Solution: ISAM uses overflow chains → longer lookup paths.


Q129

Average records per block (BF) = 100, records = 50,000. Blocks = ?
A. 500 B. 1000 C. 2000 D. 400
✅ Answer: A
Solution: 50,000 / 100 = 500 blocks.


Q130

Sequential file is inefficient for random updates because:
A. Blocks are not indexed
B. Records must remain in sorted order
C. No header record
D. Hashing is used
✅ Answer: B
Solution: Maintaining sorted order makes random insert/update costly.


Q131

Hash collision resolution by open addressing suffers from:
A. Clustering
B. Overflow chains
C. Separate index blocks
D. Directory doubling
✅ Answer: A
Solution: Linear probing → primary clustering.


Q132

A file has records of 300 B each; block = 4096 B; header = 96 B. BF = ?
A. 13 B. 14 C. 15 D. 12
✅ Answer: A
Solution: (4096 − 96) / 300 = 4000 / 300 ≈ 13.33 → floor = 13.


Q133

If hash file load factor α > 1, then:
A. No collisions
B. Overflows occur
C. Hash table empty
D. Directory shrinks
✅ Answer: B
Solution: α > 1 means records > buckets → overflows.


Q134

B+ tree height reduces when:
A. Order increases
B. Keys decrease
C. Splits occur
D. Merges occur
✅ Answer: A
Solution: Higher order (fan-out) → fewer levels.


Q135

Hash table with n records and m buckets. Load factor = ?
A. m/n B. n/m C. n × m D. n − m
✅ Answer: B
Solution: α = n/m.


Q136

In extendible hashing, global depth = d implies directory size = ?
A. 2ᵈ B. d² C. 2d D. 2ᵈ⁻¹
✅ Answer: A


Q137

A file of 100 MB uses 4 KB blocks. Total blocks ≈ ?
A. 25,000 B. 24,000 C. 26,000 D. 20,000
✅ Answer: A
Solution: 100 × 1024 KB / 4 KB = 25,600 blocks ≈ 25,000 (closest option).


Q138

Average access time improves most with:
A. Clustered index
B. Heap file
C. Random hash
D. Overflow chains
✅ Answer: A


Q139

B+ tree supports range queries better than hash because:
A. It preserves key order
B. It requires less space
C. It has no overflow
D. Hash uses buckets
✅ Answer: A


Q140

Heap file insert cost = ?
A. O(1) B. O(log n) C. O(n) D. O(n log n)
✅ Answer: A
Solution: Append record to end → constant time.


Q141

Sequential file update needs reorganization because:
A. Inserted records disturb order
B. Blocks too large
C. Index too small
D. No hashing
✅ Answer: A


Q142

B+ tree with order 3: each node can have max keys = ?
A. 3 B. 6 C. 5 D. 7
✅ Answer: C
Solution: 2m – 1 = 5.


Q143

In linear hashing, bucket split policy depends on:
A. Load factor
B. Global depth
C. Overflow chain length
D. Block header size
✅ Answer: A


Q144

Bitmap index size = number of distinct values × number of records / 8 bits. If 1 million records, 10 values → ?
A. 1.25 MB B. 1 MB C. 10 MB D. 0.5 MB
✅ Answer: A
Solution: (10 × 1,000,000) / 8 = 1,250,000 bytes = 1.25 MB.
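The bitmap-size formula (one bitmap of n bits per distinct value) can be sketched as (helper name is mine):

```python
def bitmap_index_bytes(num_records, distinct_values):
    # One bit per (record, value) pair: one num_records-bit bitmap per value.
    return distinct_values * num_records // 8

size = bitmap_index_bytes(1_000_000, 10)
print(size, size / 1_000_000)  # 1,250,000 bytes = 1.25 MB
```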


Q145

Index helps reduce:
A. Access time
B. Storage space
C. Hash collisions
D. Overflow chains
✅ Answer: A


Q146

Bucket directory doubles when global depth increases by:
A. 1 B. 2 C. log n D. 0.5
✅ Answer: A


Q147

In B+ tree, internal nodes store:
A. Keys and child pointers
B. Records
C. Data only
D. None
✅ Answer: A


Q148

If file has 4096 blocks and index points to every 64th block, index size = ?
A. 64 entries B. 128 C. 64 blocks D. 64 KB
✅ Answer: A
Solution: 4096 / 64 = 64 entries.


Q149

Which file organization minimizes disk seeks for range queries?
A. Sequential
B. Hash
C. Heap
D. ISAM
✅ Answer: A


Q150

When overflow area grows large in ISAM:
A. Reorganization is needed
B. Performance improves
C. Directory expands
D. B+ tree height increases
✅ Answer: A

Q151.

A file contains 50,000 records, each 100 bytes. Block size = 1000 bytes. Find the blocking factor (bfr).
A. 10 B. 5 C. 20 D. 50
✅ Answer: A
Solution:
Blocking factor = floor(block size / record size) = floor(1000 / 100) = 10.


Q152.

If blocking factor = 10 and file has 50,000 records, how many blocks are required?
A. 5000 B. 2000 C. 50 D. 10,000
✅ Answer: A
Solution:
Blocks = ceiling(records / bfr) = ceiling(50,000 / 10) = 5000 blocks.


Q153.

Which file organization provides fastest equality search on primary key?
A. Heap B. Sequential C. Hash D. B+ Tree
✅ Answer: C
Solution:
Hashing gives O(1) average lookup on key.


Q154.

For a sequential file, insertion of new record requires:
A. Finding correct place and shifting records
B. Adding at end
C. Rebuilding index
D. Creating overflow area
✅ Answer: A
Solution:
Records are ordered; new insertion needs maintaining sort order.


Q155.

In spanned blocking, a record may:
A. Span across two blocks
B. Occupy only one block
C. Be divided into indexes
D. Require linked overflow blocks
✅ Answer: A
Solution:
Spanned records can continue into next block if not enough space.


Q156.

For unspanned blocking, record size = 200 bytes, block size = 1024 bytes. Blocking factor = ?
A. 5 B. 4 C. 6 D. 3
✅ Answer: A
Solution:
floor(1024 / 200) = floor(5.12) = 5; only 5 full records fit.


Q157.

Which organization needs directory doubling when overflow occurs?
A. Linear hashing B. Extendible hashing C. ISAM D. B-tree
✅ Answer: B
Solution:
Extendible hashing uses directory doubling when global depth increases.


Q158.

A B+ Tree of order 4 can have a maximum of how many keys per node?
A. 4 B. 3 C. 7 D. 8
✅ Answer: C
Solution:
Max keys = 2 * order – 1 = 2 × 4 – 1 = 7.


Q159.

Which of the following is NOT a characteristic of heap file?
A. Fast insertion
B. Ordered access
C. Slow search
D. Direct append
✅ Answer: B
Solution:
Heap files are unordered, so ordered access isn’t possible.


Q160.

If each bucket holds 10 records, and hash file has 500 records, how many buckets are needed?
A. 50 B. 25 C. 100 D. 10
✅ Answer: A
Solution:
500 / 10 = 50 buckets.


Q161.

In a hash-based file, bucket overflow can be minimized by:
A. Increasing bucket size
B. Better hash function
C. Using overflow chaining
D. All of these
✅ Answer: D
Solution:
All methods reduce overflow likelihood.


Q162.

Which index type stores pointers to data records for every search key value?
A. Dense index B. Sparse index C. Clustering index D. Multi-level index
✅ Answer: A
Solution:
Dense index has an entry for each record.


Q163.

What is the access time for a record in sequential file with 1000 blocks (average seek = 4ms, transfer = 1ms)?
A. 5ms B. 6ms C. 4ms D. 2ms
✅ Answer: A
Solution:
Access time per block = seek + transfer = 4 + 1 = 5 ms.


Q164.

B+ Tree leaves are linked sequentially to support:
A. Range queries
B. Hash search
C. Clustering
D. Insertions
✅ Answer: A
Solution:
Leaf-level linkage allows efficient range scanning.


Q165.

Which structure combines hashing and B+ trees?
A. Hybrid index B. Dynamic hashing C. Extendible B-tree D. Grid file
✅ Answer: D
Solution:
Grid files combine hash-style directory addressing with ordered, multi-attribute partitioning of the key space.


Q166.

A RAID-5 system with 4 disks stores parity on:
A. All disks B. One disk only C. Two disks D. Cache
✅ Answer: A
Solution:
RAID-5 distributes parity blocks across all disks.


Q167.

For a file of 1000 records, 10 per block, sequential search requires on average how many block accesses?
A. 500 B. 1000 C. 50 D. 10
✅ Answer: C
Solution:
File occupies 1000 / 10 = 100 blocks; average sequential search = 100 / 2 = 50 block accesses.


Q168.

Clustering index is created on:
A. Non-key attribute with sorted data file
B. Primary key
C. Heap file
D. Hash attribute
✅ Answer: A
Solution:
Clustering index organizes data physically on a non-key attribute.


Q169.

In ISAM, overflow chains are used to handle:
A. Insertions
B. Deletions
C. Index reorganization
D. Updates
✅ Answer: A
Solution:
Insertions causing full blocks go to overflow area.


Q170.

B+ Trees maintain balance by:
A. Splitting and merging nodes
B. Sequential scanning
C. Directory doubling
D. None
✅ Answer: A
Solution:
Splits on insertion and merges on deletion keep all leaves at the same depth.


Q171.

Which organization gives the fastest range query?
A. Hash B. B+ Tree C. Heap D. ISAM
✅ Answer: B
Solution:
B+ Tree leaves are sequentially linked, perfect for range lookups.


Q172.

The blocking factor depends on:
A. Record size and block size
B. Hash function
C. Number of buckets
D. Access path
✅ Answer: A
Solution:
bfr = floor(block size / record size).


Q173.

File with 10,000 records and blocking factor 20 ⇒ how many blocks?
A. 500 B. 1000 C. 400 D. 200
✅ Answer: A
Solution:
10,000 / 20 = 500 blocks.


Q174.

Which of the following is true for multi-level index?
A. Speeds up access time
B. Increases storage cost
C. Improves range queries
D. All of these
✅ Answer: D
Solution:
Multi-level indexes add overhead but greatly reduce search cost.


Q175.

Which file organization is best for static data?
A. ISAM B. Hash C. Sequential D. Heap
✅ Answer: A
Solution:
ISAM is optimal for static datasets since reorganization is costly.


Q176.

File reorganization is needed when:
A. Too many overflow records exist
B. Data size shrinks
C. Index becomes inconsistent
D. All of these
✅ Answer: D


Q177.

A dense index stores:
A. One entry per record
B. One entry per block
C. One entry per key range
D. None
✅ Answer: A


Q178.

Which index type allows only one search key per file?
A. Primary index B. Secondary index C. Dense index D. Clustered index
✅ Answer: A
Solution:
Primary index is built on the file’s primary key only.


Q179.

Unspanned records waste space due to:
A. Fragmentation
B. Overflow
C. Collisions
D. Indexing
✅ Answer: A


Q180.

RAID-1 mirrors data for:
A. Fault tolerance
B. Higher throughput
C. Compression
D. Reorganization
✅ Answer: A


Q181.

In a hash file with 16 buckets, hash(key) = key mod 16. Key = 51 → bucket = ?
A. 3 B. 4 C. 2 D. 15
✅ Answer: A
Solution:
51 mod 16 = 3.
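The modulo hash used in this question maps directly to Python's `%` operator (sketch; helper name is mine):

```python
def bucket_for(key, num_buckets):
    # Simple modulo hash, as in the question: h(key) = key mod num_buckets.
    return key % num_buckets

print(bucket_for(51, 16))  # 3
```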


Q182.

In B+ tree, deletion of key causes:
A. Merge if underflow
B. Split
C. Overflow
D. Rehash
✅ Answer: A


Q183.

If seek = 4 ms and transfer = 2 ms, random read ≈ ?
A. 6 ms B. 8 ms C. 10 ms D. 4 ms
✅ Answer: A


Q184.

A file with 5 levels of B+ index → total block accesses for key search = ?
A. 6 B. 5 C. 4 D. 7
✅ Answer: A
Solution:
Level traversals (5) + 1 data access = 6.


Q185.

Which improves sequential file performance?
A. Prefetching buffers
B. Random access
C. Bucket chaining
D. Compression
✅ Answer: A


Q186.

In B+ tree, order increases ⇒
A. Tree height decreases
B. Tree height increases
C. No change
D. Unbalanced
✅ Answer: A


Q187.

Which organization allows direct access without index?
A. Hash B. Sequential C. Heap D. Clustered
✅ Answer: A


Q188.

Dense vs Sparse index — Sparse index is smaller because:
A. One entry per block
B. One entry per record
C. Fixed key size
D. Compression
✅ Answer: A


Q189.

What does buffer replacement policy decide?
A. Which block to remove from buffer
B. Which disk to access
C. Which record to hash
D. None
✅ Answer: A


Q190.

In RAID-0, data is:
A. Striped across disks
B. Mirrored
C. Parity stored
D. Encoded
✅ Answer: A


Q191.

Which file structure best supports frequent insertions/deletions?
A. Heap B. Sequential C. Hash D. B+ Tree
✅ Answer: A


Q192.

Seek time dominates when:
A. Records spread across disks
B. Data in cache
C. Using SSD
D. Buffer full
✅ Answer: A


Q193.

The term “blocking factor” refers to:
A. Number of records per block
B. Number of blocks per file
C. Index entries
D. Page hits
✅ Answer: A


Q194.

Which method ensures minimal disk I/O during join operations?
A. Clustering
B. Hashing
C. Sequential scan
D. Heap access
✅ Answer: A


Q195.

Record length = 200 bytes, block = 2048 bytes, unspanned → max records/block = ?
A. 10 B. 9 C. 8 D. 7
✅ Answer: A
Solution: floor(2048 / 200) = floor(10.24) = 10 full records fit.


Q196.

If a block can hold 10 records, how many blocks needed for 870 records?
A. 87 B. 88 C. 86 D. 89
✅ Answer: B
Solution: ceil(870/10) = 88 blocks.


Q197.

Which of these is a secondary storage structure?
A. RAID B. Cache C. Register D. RAM
✅ Answer: A


Q198.

Sequential files are inefficient for:
A. Random search
B. Sorted traversal
C. Batch updates
D. Range queries
✅ Answer: A


Q199.

In B+ tree, internal node key repetition occurs because:
A. Keys propagated upwards
B. Duplicate values
C. Hash function
D. Overflow
✅ Answer: A


Q200.

Which organization maintains separate areas for data and metadata (directory)?
A. Indexed B. Sequential C. Heap D. Hash
✅ Answer: A
Solution:
Indexed files maintain directory (index) separate from data blocks.
