Due by 5 PM, Wednesday (not Monday) Nov. 29, 2000
1. Exercises 7.18 and 7.19 from the text. You'll need SRAMs for both data and tag parts of the caches.
2. Merge Exercises 7.33 and 7.34 and make one unified picture of the TLB and cache. Be careful to show the widths of everything.
3. What's good about a direct-mapped cache whose size is less than or equal to the virtual memory page size? Hint: try using such a cache in your picture from question 2. How can adding associativity achieve the same benefit for bigger caches?
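To build intuition for the hint, here is a quick sanity check you can run: it tests whether every cache index (and block-offset) bit falls inside the page offset, in which case the index bits are identical in the virtual and physical addresses. The parameter values below are hypothetical illustrations, not taken from the exercises.

```python
# Sketch: when can a direct-mapped cache be indexed before address
# translation? Exactly when index + block-offset bits fit inside the
# page offset, so the "virtual" index bits equal the physical ones.

def index_is_physical(page_bytes, cache_bytes, block_bytes):
    """True if all index and offset bits lie within the page offset."""
    block_offset_bits = (block_bytes - 1).bit_length()
    index_bits = (cache_bytes // block_bytes - 1).bit_length()
    page_offset_bits = (page_bytes - 1).bit_length()
    return block_offset_bits + index_bits <= page_offset_bits

# A 1 KB direct-mapped cache with 1 KB pages: index never leaves
# the page offset, so translation and cache indexing can overlap.
print(index_is_physical(page_bytes=1024, cache_bytes=1024, block_bytes=32))

# A 128 KB cache with 1 KB pages: some index bits would have to come
# from the (untranslated) virtual page number.
print(index_is_physical(page_bytes=1024, cache_bytes=128 * 1024, block_bytes=64))
```

Increasing associativity shrinks the number of sets, and hence the number of index bits, which is why it can restore this property for larger caches.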
4. (counts as two questions) Here's an idea for speeding up cache access without going all the way to a virtual cache: use a virtual index but a physical tag. In other words, use bits from a reference's virtual address as the index, but store in the tag part of the cache bits from the reference's physical address. Could this idea be made to work? Draw a picture of a TLB and cache using this idea, and show the circuit for the cache hit signal, including exactly which bits get compared with which other bits to produce the address match. Assume all this: 32-bit virtual (byte!) addresses, 30-bit physical addresses, 1 Kbyte pages, 64-byte cache blocks, direct-mapped cache and TLB, and cache size 128 Kbytes. Have the data part of the cache deliver 32-bit words (as usual). What are the advantages and drawbacks of this idea, in your view?
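Before drawing, it may help to check the field widths these parameters imply. The sketch below is only arithmetic on the stated parameters (the variable names are my own); it is a cross-check for your picture, not the picture itself.

```python
# Bit-width arithmetic for question 4's stated parameters:
# 32-bit virtual addresses, 30-bit physical addresses, 1 KB pages,
# 64-byte blocks, 128 KB direct-mapped cache.

VA_BITS, PA_BITS = 32, 30
PAGE_BYTES, BLOCK_BYTES, CACHE_BYTES = 1024, 64, 128 * 1024

page_offset = (PAGE_BYTES - 1).bit_length()             # bits unchanged by translation
block_offset = (BLOCK_BYTES - 1).bit_length()           # byte within a cache block
index = (CACHE_BYTES // BLOCK_BYTES - 1).bit_length()   # which cache block
vpn = VA_BITS - page_offset                             # virtual page number (TLB input)
ppn = PA_BITS - page_offset                             # physical page number (TLB output)
phys_tag = PA_BITS - (index + block_offset)             # physical bits stored as the tag

# How many of the index bits must come from the untranslated virtual
# page number (these are what make the index "virtual"):
virtual_index_bits = (index + block_offset) - page_offset

print(page_offset, block_offset, index, vpn, ppn, phys_tag, virtual_index_bits)
```

The last quantity is the interesting one for the advantages/drawbacks discussion: those index bits are available before the TLB finishes, which is the speedup, but they are virtual, which is where the potential trouble lies.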