What Is Cache Memory in a Computer?

Unstop
8 min read · Apr 11, 2022


Table of contents:

  • Types of cache memory
  • How does cache memory work?
  • Cache vs RAM
  • Cache vs Virtual Memory
  • Levels of Cache Memory
  • Cache Mapping

Cache memory is a very high-speed memory that works in step with the CPU to speed up processing. Per byte, cache is costlier than main memory or disk storage but more economical than CPU registers. It is an extremely fast memory whose main function is to act as a buffer between RAM and the CPU: it holds frequently requested data and instructions so that they are immediately available to the CPU when needed. The term 'cache' originally refers to something hidden or stored away, or the place where it is hidden.

Cache memory is a temporary memory, formally termed 'CPU cache memory'. In most computers it is built into the processor chip itself for faster data access.


Types of cache memory

Cache memory is broadly classified as follows:

  • Primary Cache: The primary cache is located on the processor chip itself. It is very small, and its access time is comparable to that of processor registers.
  • Secondary Cache: The secondary cache sits between the primary cache and the system's main memory. It is referred to as the L2 (level 2) cache. Most of the time the L2 cache is housed on the processor chip itself, but it may be located on a separate chip. The L1, L2, and L3 caches are discussed below in this article.

How does cache memory work?

When the CPU requires data, it automatically turns to the cache memory first for fast, economical access. Cache storage is temporary storage, like RAM. It is needed because system RAM is slower and sits a bit further from the CPU, which has a real impact on performance and latency when data must be accessed frequently. When the requested data is found in cache memory, this is called a cache hit.

Hit ratio = hits / (hits + misses) = number of hits / total number of accesses

A cache hit enables the processor to retrieve data quickly, making your overall system more efficient and faster.

However, cache memory is much smaller than system RAM and only stores data temporarily, so it may not always hold the information the processor needs. When the cache does not contain the required data, this is called a cache miss, and the CPU must fall back to RAM (and, in the worst case, to the hard drive).
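As a quick illustration of the hit-ratio formula above, here is a minimal sketch in Python; the counter values are made-up example numbers, not measurements from a real system.

```python
# Hit ratio = hits / (hits + misses)
def hit_ratio(hits, misses):
    total = hits + misses
    return hits / total if total else 0.0

# Example: 80 cache hits and 20 cache misses out of 100 accesses
print(hit_ratio(80, 20))  # 0.8, i.e. 80% of accesses were served from cache
```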


Cache vs RAM

RAM (Random Access Memory), also called main or primary memory, is a volatile memory of comparatively large size that stores data only as long as power is supplied. It is cheaper per byte than cache memory because it is built from DRAM, which is denser and less expensive than the SRAM used for caches, but also slower. Still, RAM is faster than a hard disk, a floppy disk, a compact disc, or any other form of secondary storage. The CPU checks cache memory before reading RAM, and if the data is unavailable in cache, performance suffers. A plus point of RAM is that its capacity can be expanded by installing additional memory modules, whereas cache size is fixed by the processor design.

On the other hand, cache memory is much smaller than RAM and holds the data that the CPU needs to access most frequently. It is very fast compared to RAM. Cache memory exists essentially to reduce data access time: the CPU checks the cache first before turning to RAM.

As we all know, RAM is also the computer's main memory, and programs execute from RAM. These days, however, super-fast SSDs can serve as swap space, so secondary storage can also back up main memory when the system runs short of RAM.

Cache vs Virtual Memory

Virtual memory is a logical extension of computer memory that increases the effective capacity of main memory, allowing programs larger than main memory to execute. The SSD-based swapping mentioned above is one part of how it works. Virtual memory is very large compared to cache memory. Although virtual memory is useful, it is not an actual physical memory unit, nor is it a high-speed memory like the cache. It is helpful when we need to execute programs that cannot fit entirely in main memory. Virtual memory is not hardware-controlled; it is managed by the operating system (OS), which requires a mapping structure (a page table) to map each virtual address to a physical address.
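To make the OS-controlled mapping concrete, here is a minimal, hypothetical sketch of translating a virtual address to a physical address with a page table; the 4 KB page size is a common real-world value, but the table contents are invented for illustration.

```python
PAGE_SIZE = 4096  # 4 KB pages, a common choice

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 9}

def translate(virtual_address):
    vpn = virtual_address // PAGE_SIZE    # virtual page number
    offset = virtual_address % PAGE_SIZE  # offset within the page
    if vpn not in page_table:
        raise RuntimeError("page fault: page not in main memory")
    return page_table[vpn] * PAGE_SIZE + offset

# Virtual address 4100 lies in page 1 (mapped to frame 2), offset 4
print(translate(4100))  # 8196
```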


Levels of Cache Memory

Computer systems these days have more than one level of cache memory, varying in size, proximity to the processor cores, and speed. These are called cache levels. Let's have a look at them:

Level 1: The L1 cache is built into the CPU chip itself. It is typically the fastest cache memory and occupies a limited amount of space. In most modern CPUs it is divided into two parts: a data section and an instruction section. Since it is the fastest, the L1 cache is the first place the processor looks for data or instructions. A modern CPU typically has an L1 cache of around 32 KB per core.

Level 2: The L2 cache is usually also on the CPU chip, but it may be located on a separate chip close to the CPU. The L2 cache is typically larger than the L1 cache, on the order of 256 KB per core.

Level 3: The L3 cache is typically much larger than the L1 or L2 cache, but it differs in another significant way. Unlike the L1 and L2 caches, which are private to each CPU core, the L3 cache is usually shared by all cores. As a result, it plays a crucial role in data exchange and inter-core communication. The L3 cache is on the order of 2 MB per core.
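On Linux you can inspect these levels yourself through sysfs; the following sketch assumes a Linux system exposing /sys/devices/system/cpu/cpu0/cache and will simply print nothing elsewhere.

```python
from pathlib import Path

# Each index* directory under sysfs describes one cache of CPU core 0
for cache in sorted(Path("/sys/devices/system/cpu/cpu0/cache").glob("index*")):
    level = (cache / "level").read_text().strip()
    ctype = (cache / "type").read_text().strip()  # Data, Instruction, or Unified
    size = (cache / "size").read_text().strip()
    print(f"L{level} {ctype}: {size}")
```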

Cache Mapping

As previously stated, cache memory is incredibly fast, meaning it can be read from very quickly. However, there is a bottleneck: data must first be located before it can be read. The CPU knows the RAM address of the data or instruction it wishes to read; it must then search the cache for a reference to that RAM location and for the data or instruction associated with it.

Data or instructions from RAM can be mapped into the cache in several ways, and the choice directly affects how quickly they can be found. There is a trade-off, however: mapping schemes that minimize search time also reduce the likelihood of a cache hit, while schemes that maximize the chances of a cache hit increase the search time.

The common cache mapping methods are:

1. Direct Mapping

With a direct-mapped cache, a given block of RAM data can be stored in only one location in cache memory. This means the CPU needs to check just one cache location to determine whether the data or instructions it seeks are present, and if they are, they are found quickly. The disadvantage of a direct-mapped cache is that it severely restricts where data or instructions can be placed in the cache, making cache hits less frequent.
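A minimal sketch of the direct-mapped placement rule, assuming a hypothetical cache of 64 lines with 64-byte blocks (the sizes are illustrative, not from the article):

```python
BLOCK_SIZE = 64  # bytes per cache line (illustrative)
NUM_LINES = 64   # lines in the cache (illustrative)

def direct_mapped_line(address):
    block_number = address // BLOCK_SIZE
    return block_number % NUM_LINES  # the only line this block may occupy

# Blocks whose numbers are NUM_LINES apart collide on the same line
print(direct_mapped_line(0))        # line 0
print(direct_mapped_line(64 * 64))  # also line 0: the two blocks evict each other
```

This collision is exactly why direct-mapped hits can be infrequent: two frequently used addresses that map to the same line keep evicting one another.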

2. Associative Mapping

This is the polar opposite of direct mapping and is also known as fully associative mapping. With an associative mapping scheme, any block of data or instructions from RAM can be placed in any cache line. That means the CPU must search the entire cache to check whether it contains the information it seeks, but the chances of finding it are significantly higher.
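In a fully associative cache the lookup has no index step; a sketch, with a tiny invented set of cached blocks, must check every line's tag:

```python
BLOCK_SIZE = 64

# Tags of blocks currently in the cache; contents invented for illustration
cache_tags = [0x1A, 0x2B, 0x3C]

def fully_associative_hit(address):
    tag = address // BLOCK_SIZE  # with no index bits, the whole block number is the tag
    return tag in cache_tags     # software scans the list; hardware compares all tags in parallel

print(fully_associative_hit(0x1A * 64))  # True: block 0x1A may sit anywhere in the cache
print(fully_associative_hit(0x99 * 64))  # False: a cache miss
```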

3. Set-Associative Mapping

Set-associative mapping, which allows a block of RAM to be mapped to a limited number of different cache lines, is a compromise between the two methods above. Under a 2-way set-associative scheme, a RAM block can be placed in either of two locations in the cache; an 8-way scheme would allow it to be placed in any of eight locations within its set. Because the CPU has to look in two places instead of just one, a 2-way lookup takes longer than a direct-mapped one, but there is a significantly higher chance of a cache hit.
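A sketch of a 2-way set-associative lookup under the same invented sizes: the index selects a set, and only that set's two tags need to be checked.

```python
BLOCK_SIZE = 64
NUM_SETS = 32  # 64 lines organized as 32 sets of 2 ways (illustrative)

# Each set holds up to two tags; starts empty
sets = [[] for _ in range(NUM_SETS)]

def lookup(address):
    block = address // BLOCK_SIZE
    index = block % NUM_SETS   # which set the block must live in
    tag = block // NUM_SETS    # identifies the block within that set
    return tag in sets[index]  # at most 2 comparisons instead of 64

def insert(address):
    block = address // BLOCK_SIZE
    index, tag = block % NUM_SETS, block // NUM_SETS
    ways = sets[index]
    if tag not in ways:
        if len(ways) == 2:  # set full: evict the oldest entry (FIFO for simplicity)
            ways.pop(0)
        ways.append(tag)

insert(0); insert(64 * 32)         # two blocks that both map to set 0 can coexist
print(lookup(0), lookup(64 * 32))  # True True: a direct-mapped cache would have evicted one
```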


Summing Up

We can conclude that cache memory improves a computer's speed and performance, since the CPU can access cached data much faster than data in RAM or on the local disk. A 6 MB cache is a decent size, but 8 MB is better; in general, the larger the cache, the faster and more efficient the CPU is, as cache is the fastest memory in a computer. Note, however, that cache size has no impact on the CPU's clock speed.

You may also like to read:

  1. What Is Paging In Operating System?
  2. Difference Between Hardware and Software
  3. 10 Most Important Programming Language for AI
  4. Data Redundancy in DBMS

