Friday, July 23, 2010
12:42 AM

Cache Memory



In this article we will look at what a cache is, why it is needed, and how it works.

In the memory hierarchy, cache memory comes next after the processor registers.

Look at the following figure, and assume it takes 10 time units for data to travel from the memory (RAM, or the physical memory) to the processor, i.e. if the processor requests some data, it will take 10 time units for it to reach the processor.

Note: The time units are only for explanation and do not reflect the actual time the memory takes to produce the data.



Let us assume that the processor requests “Data1” from the memory, which will reach the processor after 10 time units. Once the processor has finished working on “Data1” in the above setup, it will have to send it back to the memory, as processors need their registers to be free for further processing.
After a few instructions, if the processor needs “Data1” again, the same time has to be spent all over again to fetch the data.
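To put numbers on this, here is a minimal sketch of the situation just described. The 10-time-unit figure is the illustrative value from above, not a real latency, and `fetch_from_ram` is a made-up helper for this example:

```python
# Illustrative cost of fetching the same data twice from RAM,
# with no cache in between (time units are the made-up values above).
RAM_LATENCY = 10  # time units per access to main memory


def fetch_from_ram(address):
    """Pretend to fetch a word from RAM; return the time spent."""
    return RAM_LATENCY


# First use of "Data1", then a re-fetch a few instructions later:
total_time = fetch_from_ram(0x100) + fetch_from_ram(0x100)
print(total_time)  # 20 time units: the full cost is paid both times
```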

Studies of various programs have shown that data accessed once by the processor is often required again very soon. That is, if “Data1” is needed once, it will be needed again very soon. This concept is termed temporal locality.
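A simple loop illustrates temporal locality; the function below is just an example, not anything from a real program:

```python
# Temporal locality: the same values are touched again and again.
# A running total is a classic example: "total" and "i" are reused
# on every iteration, so keeping them close to the processor pays off.
def sum_first_n(n):
    total = 0
    for i in range(n):
        total += i  # "total" is read and written on every pass
    return total


print(sum_first_n(10))  # 45
```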

Another observation is that when one piece of data is accessed, the data around it is also likely to be required soon.
That is, if “Data1” is accessed, then “Data2”, “Data3” and so on, the data in the memory locations around “Data1”, will soon be accessed as well. This concept is called spatial locality.
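Walking an array in order is the textbook case of spatial locality, since consecutive elements sit in neighbouring memory locations (the list here is just sample data):

```python
# Spatial locality: elements of a list occupy adjacent memory
# locations, so walking it in order touches "Data1", then "Data2",
# then "Data3" -- exactly the access pattern described above.
data = [10, 20, 30, 40]

total = 0
for value in data:   # sequential walk over adjacent locations
    total += value

print(total)  # 100
```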

Over the past few years processor speeds have increased tremendously, but the speed at which memory works has not kept pace. As a result, even though the processor is capable of working faster, it ends up waiting for data to arrive from the memory.
The cache is one way of making the memory appear to work a little faster.


RAM, being a slower memory, will always take a long time to deliver data. Thus the cache was introduced:

  1. To compensate for the slow memory access time of the RAM
  2. To take advantage of temporal and spatial locality.





As shown in the figure above, the cache sits in between the RAM and the processor. The technology with which the cache is built makes it work faster than the RAM, so it takes less time for data to reach the processor from the cache than from the RAM.
Let's say it takes 5 time units for data to reach the processor from the cache. Thus, if “Data1” is stored temporarily in the cache, the processor can access it faster than when the data was only in the RAM.

But the same technology that makes the cache faster also makes it very expensive, so we cannot use as much cache as we want in our systems.

The processor first checks for the data in the cache; if the requested data is available there, it need not go to the RAM.

The cache in turn acts as a temporary store for data present in the RAM. Whenever data is fetched from the RAM, a copy gets stored in the cache, and the processor makes use of this copy as long as it is available. This reduces the time it takes for the processor to get the data it needs, and hence makes the processor work faster.
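The whole interaction can be sketched as a toy simulation. The 10 and 5 time-unit figures are the illustrative values used in this article, and the cache is modelled as a plain dictionary keyed by address (real caches are organised very differently):

```python
RAM_LATENCY = 10   # time units for a trip to RAM (figure above)
CACHE_LATENCY = 5  # time units for a cache hit

ram = {0x100: "Data1", 0x104: "Data2"}  # toy main memory
cache = {}                              # the cache starts out empty


def read(address):
    """Return (data, time_spent). Check the cache first, then RAM."""
    if address in cache:                # cache hit: no trip to RAM
        return cache[address], CACHE_LATENCY
    data = ram[address]                 # cache miss: go to RAM...
    cache[address] = data               # ...and keep a copy for next time
    return data, RAM_LATENCY


_, t1 = read(0x100)  # miss: 10 time units
_, t2 = read(0x100)  # hit:   5 time units
print(t1 + t2)       # 15 time units, versus 20 with no cache at all
```

The second read of the same address is where the cache pays off: the data is already sitting in the faster memory, so only 5 time units are spent instead of 10.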

But that also brings a new problem: the processor accesses data in memory by address, i.e. if the processor wants “Data1”, it sends the address of “Data1”. So if we move “Data1” to the cache, how would the processor know which memory location the data stored in the cache came from?
This brings us to the next topic, “Addressing in cache”.
