Mapping functions — Memory Organization


What Are Memory Mapping Functions?

Imagine your computer as a huge library.
You (the CPU) want to find a particular book (data). But this library is massive — thousands of shelves (main memory).

So, instead of wandering around every time, you have a small book rack near your desk (that’s your cache memory) where you keep the books you use most often.

Now, here’s the question:
How do you decide which books from the library go on your small desk rack?
And when your rack is full, where should each new book be placed?

That’s exactly what memory mapping functions are all about!
They define how blocks of data from the main memory are placed into cache memory.


🧩 Why Do We Need Mapping?

Cache memory is much smaller than main memory.
So we can’t just copy everything — we have to choose which part of memory goes where in the cache.

This process of matching main memory blocks to cache blocks is called cache mapping or memory mapping.

It’s like assigning parking spots to cars — every car (memory block) needs to know where to park in the limited parking area (cache).


🧠 Basic Idea Before We Begin

Let’s assume:

  • Main memory is divided into blocks.
  • Cache memory is divided into lines (each line stores one block at a time).

When the CPU needs data:

  1. It looks in the cache first (quick check!).
  2. If the data is there → Cache hit ✅
  3. If not → Cache miss ❌ and the data is brought from main memory into cache (a minimal sketch of this check appears right after this list).
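
In code terms, the hit/miss check looks roughly like this. The sketch below is hypothetical Python: the cache is modelled as a plain dictionary keyed by block number, which is not how real hardware stores it, but the hit/miss flow is the same.

    # Toy model: the cache is a dictionary from block number to data.
    cache = {}

    def read_block(block_number, main_memory):
        """Return a block's data, loading it into the cache on a miss."""
        if block_number in cache:        # cache hit: the block is already present
            print("Cache hit")
            return cache[block_number]
        print("Cache miss")              # cache miss: fetch from main memory
        data = main_memory[block_number]
        cache[block_number] = data       # copy the block into the cache
        return data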

Now, let’s see how main memory blocks find their place inside the cache — through three mapping techniques.


🧩 Types of Memory Mapping Functions

1️⃣ Direct Mapping

This is the simplest method.
Here, each block of main memory can go into only one specific cache line.

Think of it like assigned seating in a classroom 🎒:
Every student (memory block) has exactly one assigned seat (cache line) and cannot sit anywhere else. Several students may share the same assigned seat, but only one of them can occupy it at a time.


⚙️ How It Works

Let’s say:

  • Cache has 8 lines.
  • Main memory has 32 blocks.

Then:

Main Memory Block 0 → Cache Line 0
Main Memory Block 1 → Cache Line 1
...
Main Memory Block 8 → Cache Line 0 again
Main Memory Block 9 → Cache Line 1 again

So blocks 0, 8, 16, 24 all share the same cache line (because 8 lines can’t hold all 32 blocks).


📘 Formula:

Cache line number = (Main memory block number) MOD (Number of cache lines)
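
As a rough illustration of this formula, here is a minimal Python sketch with 8 cache lines and 32 main-memory blocks; the constant and function names are made up for this example:

    NUM_CACHE_LINES = 8   # the cache has 8 lines; main memory has blocks 0..31

    def cache_line_for(block_number):
        """Direct mapping: cache line = block number MOD number of cache lines."""
        return block_number % NUM_CACHE_LINES

    # Blocks 0, 8, 16 and 24 all compete for the same line:
    for block in (0, 8, 16, 24):
        print(f"Block {block} -> cache line {cache_line_for(block)}")  # line 0 every time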

🧩 Diagram — Direct Mapping

        +---------------------+
        |         CPU         |
        +----------+----------+
                   |
        +-----------------------------+
        |            CACHE            |
        +-----------------------------+
        | Line 0  ← Block 0, 8, 16... |
        | Line 1  ← Block 1, 9, 17... |
        | Line 2  ← Block 2, 10, 18...|
        +-----------------------------+
                   |
        +-----------------------------+
        |         MAIN MEMORY         |
        |        Block 0 - 31         |
        +-----------------------------+

💡 Pros:

  • Simple and fast to locate.
  • Easy to implement.

⚠️ Cons:

  • High chance of collisions (different blocks mapping to the same line).
  • Frequent replacements if those blocks are accessed repeatedly.

2️⃣ Associative Mapping

Now imagine a classroom where there are no assigned seats.
Any student can sit anywhere — total freedom! 🎓

That’s what happens in associative mapping.

Here, any block of main memory can go into any cache line.
There’s no fixed position — the cache just picks an empty line.


⚙️ How It Works

When the CPU looks for data:

  • It checks all cache lines to see if the block is there.
  • To do this quickly, each line has a tag — a small identifier that tells which memory block it holds.
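
Here is a minimal Python sketch of that tag search, assuming the cache is just a list of (tag, data) pairs. Note that real hardware compares all tags in parallel; the loop below checks them one by one purely for illustration.

    # Each cache line holds a (tag, data) pair; None marks an empty line.
    NUM_CACHE_LINES = 8
    cache_lines = [None] * NUM_CACHE_LINES

    def lookup(block_number):
        """Fully associative lookup: compare the requested block against every tag."""
        for entry in cache_lines:
            if entry is not None and entry[0] == block_number:
                return entry[1]          # hit: a tag matched
        return None                      # miss: no tag matched

    def place(block_number, data):
        """Place the block in any empty line (replacement policy not shown)."""
        for i, entry in enumerate(cache_lines):
            if entry is None:
                cache_lines[i] = (block_number, data)
                return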

🧩 Diagram — Associative Mapping

        +---------------------+
        |         CPU         |
        +----------+----------+
                   |
           +-----------------+
           |      CACHE      |
           +-----------------+
           | Line 0 → Tag 15 |
           | Line 1 → Tag 22 |
           | Line 2 → Tag 03 |
           +-----------------+
                   |
           +-----------------+
           |   MAIN MEMORY   |
           |  Block 0 - 31   |
           +-----------------+

💡 Pros:

  • No mapping conflicts.
  • Any block can go anywhere — maximum flexibility.

⚠️ Cons:

  • Needs complex hardware to search all tags quickly.
  • Slower lookup compared to direct mapping (because we must check every tag).

3️⃣ Set-Associative Mapping

This one is a blend of the first two methods — a smart middle ground. ⚖️

Here, the cache is divided into sets, and each set contains a few lines.
Each block of main memory maps to one set, but can go into any line within that set.

Think of it like a parking lot divided into rows:

  • Each row (set) has multiple parking spots (lines).
  • A car (memory block) can park in any spot within its assigned row.

⚙️ How It Works

For example:

  • Cache has 8 lines.
  • Divided into 4 sets → each set has 2 lines.
  • Main memory has 32 blocks.

Then:

Set number = (Block number) MOD (Number of sets)

Block 5 maps to set 1 (5 MOD 4 = 1), so it can go into either of the 2 lines in that set.
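
A minimal Python sketch of this two-step idea (pick the set with the formula, then use any free line inside that set) might look like the following; the structure and names are illustrative only:

    NUM_SETS = 4
    LINES_PER_SET = 2                 # 2-way: 4 sets x 2 lines = 8 lines total

    # Each set is a small list of (tag, data) pairs.
    cache = [[] for _ in range(NUM_SETS)]

    def set_for(block_number):
        """A block maps to exactly one set."""
        return block_number % NUM_SETS

    def lookup(block_number):
        """Search only the lines of the block's own set."""
        for tag, data in cache[set_for(block_number)]:
            if tag == block_number:
                return data           # hit inside the set
        return None                   # miss

    def place(block_number, data):
        """Use any free line in the block's set (replacement policy not shown)."""
        target_set = cache[set_for(block_number)]
        if len(target_set) < LINES_PER_SET:
            target_set.append((block_number, data))

    print(set_for(5))                 # block 5 -> set 1, either line of that set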


🧩 Diagram — Set-Associative Mapping

        +---------------------+
        |        CPU          |
        +----------+----------+
                   |
           +----------------------+
           |        CACHE         |
           +----------------------+
           | Set 0 → Line 0,1     |
           | Set 1 → Line 2,3     |
           | Set 2 → Line 4,5     |
           | Set 3 → Line 6,7     |
           +----------------------+
                   |
           +----------------------+
           |      MAIN MEMORY     |
           | Blocks 0 - 31        |
           +----------------------+

💡 Pros:

  • Reduces collision problems (better than direct mapping).
  • Easier to search than fully associative mapping.

⚠️ Cons:

  • More complex than direct mapping.
  • Slightly slower than direct mapping due to searching within a set.

🧾 Quick Comparison Table

Mapping Type      Where a Block Can Go     Speed       Hardware Complexity   Example Analogy
Direct            Only one specific line   Very Fast   Simple                Assigned seat in class
Associative       Any line                 Slower      Complex               Sit anywhere
Set-Associative   Any line within a set    Medium      Moderate              Choose any seat in your row
