What Are Memory Mapping Functions?
Imagine your computer as a huge library.
You (the CPU) want to find a particular book (data). But this library is massive — thousands of shelves (main memory).
So, instead of wandering around every time, you have a small book rack near your desk (that’s your cache memory) where you keep the books you use most often.
Now, here’s the question:
How do you decide which books from the library go on your small desk rack?
And when your rack is full, where should each new book be placed?
That’s exactly what memory mapping functions are all about!
They define how blocks of data from the main memory are placed into cache memory.
🧩 Why Do We Need Mapping?
Cache memory is much smaller than main memory.
So we can’t just copy everything — we have to choose which part of memory goes where in the cache.
This process of matching main memory blocks to cache blocks is called cache mapping or memory mapping.
It’s like assigning parking spots to cars — every car (memory block) needs to know where to park in the limited parking area (cache).
🧠 Basic Idea Before We Begin
Let’s assume:
- Main memory is divided into blocks.
- Cache memory is divided into lines (each line stores one block at a time).
When the CPU needs data:
- It looks in the cache first (quick check!).
- If the data is there → Cache hit ✅
- If not → Cache miss ❌ and the data is brought from main memory into cache.
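The hit/miss flow above can be sketched as a toy Python model (the cache is just a dictionary here, and the 32-block memory with `"data-N"` strings is made up for illustration):

```python
# Toy model of a CPU cache lookup: cache maps block numbers to data.
cache = {}                                          # block number -> cached data
main_memory = {b: f"data-{b}" for b in range(32)}   # 32 blocks of main memory

def read_block(block):
    if block in cache:          # look in the cache first (quick check!)
        print(f"Block {block}: cache hit")
    else:                       # miss: bring it in from main memory
        print(f"Block {block}: cache miss, loading from main memory")
        cache[block] = main_memory[block]
    return cache[block]

read_block(5)   # first access → miss
read_block(5)   # second access → hit, it's cached now
```

Note this sketch never evicts anything; a real cache is finite, which is exactly why mapping functions matter.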
Now, let’s see how main memory blocks find their place inside the cache — through three mapping techniques.
🧩 Types of Memory Mapping Functions
1️⃣ Direct Mapping
This is the simplest method.
Here, each block of main memory can go into only one specific cache line.
Think of it like assigned seating in a classroom 🎒:
Every student (memory block) has exactly one seat (cache line). No one else can sit there.
⚙️ How It Works
Let’s say:
- Cache has 8 lines.
- Main memory has 32 blocks.
Then:
Main Memory Block 0 → Cache Line 0
Main Memory Block 1 → Cache Line 1
...
Main Memory Block 8 → Cache Line 0 again
Main Memory Block 9 → Cache Line 1 again
So blocks 0, 8, 16, 24 all share the same cache line (because 8 lines can’t hold all 32 blocks).
📘 Formula:
Cache line number = (Main memory block number) MOD (Number of cache lines)
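As a quick sanity check, the formula is a one-line computation (a minimal sketch using the 8-line cache from the example above):

```python
NUM_CACHE_LINES = 8  # cache size from the example above

def cache_line(block):
    # Direct mapping: each block has exactly one possible cache line.
    return block % NUM_CACHE_LINES

# Blocks 0, 8, 16, 24 all collide on line 0:
print([cache_line(b) for b in (0, 8, 16, 24)])  # → [0, 0, 0, 0]
print(cache_line(9))                            # → 1
```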
🧩 Diagram — Direct Mapping
```
        +-------------+
        |     CPU     |
        +------+------+
               |
+----------------------------+
|           CACHE            |
+----------------------------+
| Line 0 ← Block 0, 8, 16... |
| Line 1 ← Block 1, 9, 17... |
| Line 2 ← Block 2, 10, 18...|
+----------------------------+
               |
       +---------------+
       |  MAIN MEMORY  |
       |  Block 0 - 31 |
       +---------------+
```
💡 Pros:
- Simple and fast to locate.
- Easy to implement.
⚠️ Cons:
- High chance of collisions (different blocks mapping to the same line).
- Frequent replacements (thrashing) when blocks that share a line are accessed alternately.
2️⃣ Associative Mapping
Now imagine a classroom where there are no assigned seats.
Any student can sit anywhere — total freedom! 🎓
That’s what happens in associative mapping.
Here, any block of main memory can go into any cache line.
There’s no fixed position — the cache just picks an empty line (and when no line is free, a replacement policy such as LRU decides which line to evict).
⚙️ How It Works
When the CPU looks for data:
- It checks all cache lines to see if the block is there.
- To do this quickly, each line has a tag — a small identifier that tells which memory block it holds.
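The tag search can be sketched like this (real hardware compares all tags in parallel; this Python loop does it sequentially just to show the idea, and the tag values are illustrative):

```python
# Fully associative cache: each line stores (tag, data); any block fits anywhere.
cache_lines = [(15, "data-15"), (22, "data-22"), (3, "data-3")]

def lookup(block):
    # Compare the requested block number against every line's tag.
    for tag, data in cache_lines:
        if tag == block:
            return data        # hit
    return None                # miss: would be fetched from main memory

print(lookup(22))   # → data-22 (hit)
print(lookup(7))    # → None (miss)
```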
🧩 Diagram — Associative Mapping
```
      +-------------+
      |     CPU     |
      +------+------+
             |
+------------------+
|      CACHE       |
+------------------+
| Line 0 → Tag 15  |
| Line 1 → Tag 22  |
| Line 2 → Tag 03  |
+------------------+
             |
  +---------------+
  |  MAIN MEMORY  |
  |  Block 0 - 31 |
  +---------------+
```
💡 Pros:
- No mapping conflicts.
- Any block can go anywhere — maximum flexibility.
⚠️ Cons:
- Needs complex hardware to search all tags quickly.
- Slower lookup compared to direct mapping (because we must check every tag).
3️⃣ Set-Associative Mapping
This one is a blend of the first two methods — a smart middle ground. ⚖️
Here, the cache is divided into sets, and each set contains a few lines.
Each block of main memory maps to one set, but can go into any line within that set.
Think of it like a parking lot divided into rows:
- Each row (set) has multiple parking spots (lines).
- A car (memory block) can park in any spot within its assigned row.
⚙️ How It Works
For example:
- Cache has 8 lines.
- Divided into 4 sets → each set has 2 lines.
- Main memory has 32 blocks.
Then:
Set number = (Block number) MOD (Number of sets)
If block 5 maps to set 1, it can go into either of the 2 lines in that set.
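Putting the numbers above into a small sketch (4 sets of 2 lines each, matching the example; which of the two candidate lines a block actually lands in is left to the replacement policy):

```python
NUM_SETS = 4
LINES_PER_SET = 2

def set_number(block):
    # Set-associative mapping: a block's set is fixed, but within that
    # set it may occupy any of the LINES_PER_SET lines.
    return block % NUM_SETS

def candidate_lines(block):
    # With 2 lines per set, set s owns lines 2s and 2s+1.
    s = set_number(block)
    return [s * LINES_PER_SET + i for i in range(LINES_PER_SET)]

print(set_number(5))        # → 1, so block 5 maps to set 1
print(candidate_lines(5))   # → [2, 3], either line can hold it
```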
🧩 Diagram — Set-Associative Mapping
```
      +-------------+
      |     CPU     |
      +------+------+
             |
+------------------+
|      CACHE       |
+------------------+
| Set 0 → Line 0,1 |
| Set 1 → Line 2,3 |
| Set 2 → Line 4,5 |
| Set 3 → Line 6,7 |
+------------------+
             |
  +----------------+
  |  MAIN MEMORY   |
  |  Blocks 0 - 31 |
  +----------------+
```
💡 Pros:
- Reduces collision problems (better than direct mapping).
- Easier to search than fully associative mapping.
⚠️ Cons:
- More complex than direct mapping.
- Slightly slower than direct mapping due to searching within a set.
🧾 Quick Comparison Table
| Mapping Type | Where a Block Can Go | Speed | Hardware Complexity | Example Analogy |
|---|---|---|---|---|
| Direct | Only one specific line | Very Fast | Simple | Assigned seat in class |
| Associative | Any line | Slower | Complex | Sit anywhere |
| Set-Associative | Any line within a set | Medium | Moderate | Choose any seat in your row |