A two-level cache hierarchy of L1 and L2, with 2 and 3 blocks respectively, is designed. Both L1 and L2 are fully associative with an LRU replacement policy. A sequence of references (block addresses, denoted as letters and read from left to right) is given in the table. Both caches are initially empty. You need to simulate the contents of L1 and L2 for the given sequence. Note that each request goes to L1 first; a request is issued to L2 only if it misses in L1. On an L2 hit, the requested block is fetched from L2 and placed into L1, at the MRU position of both caches. On an L2 miss, the block is loaded from memory into both the L1 and L2 caches at the MRU position. The cache contents are displayed as block addresses from the MRU position to the LRU position, separated by commas.
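The table with the actual reference sequence is not reproduced here, but the policy above can be simulated directly. Below is a minimal Python sketch of the two-level LRU simulation; the sequence `ABCAB` used in the usage example is an assumed placeholder, not the sequence from the missing table.

```python
def simulate(refs, l1_size=2, l2_size=3):
    # Each cache is a list with the MRU block at index 0 and the LRU block
    # at the end; fully associative, so any block can occupy any slot.
    l1, l2 = [], []

    def touch(cache, size, block):
        # Move (or insert) the block to the MRU position,
        # evicting the LRU block if the cache overflows.
        if block in cache:
            cache.remove(block)
        cache.insert(0, block)
        if len(cache) > size:
            cache.pop()

    history = []
    for b in refs:
        if b in l1:
            touch(l1, l1_size, b)      # L1 hit: L2 is not consulted
        elif b in l2:
            touch(l2, l2_size, b)      # L2 hit: refresh L2 recency...
            touch(l1, l1_size, b)      # ...and fill L1 at MRU
        else:
            touch(l2, l2_size, b)      # L2 miss: load from memory
            touch(l1, l1_size, b)      # into both caches at MRU
        history.append((",".join(l1), ",".join(l2)))
    return history

# Example with an assumed sequence A B C A B:
for step, (c1, c2) in enumerate(simulate(list("ABCAB")), 1):
    print(f"after ref {step}: L1 = {c1:<5} L2 = {c2}")
```

For this placeholder sequence the final state is L1 = `B,A` and L2 = `B,A,C`: the reference to `C` evicts `A` from L1 (but `A` survives in the larger L2), so the later references to `A` and `B` hit in L2 and are refilled into L1.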