The first two questions involve a virtual memory system. Here are some facts provided by the assignment sheet:
Let's stop here and answer the first part of the question: a) How much virtual memory is available in the system described above in bytes? in pages? From the above 3 pieces of information, you should see that there are 2^32 = 4,294,967,296 bytes, or 4 Gigabytes of virtual memory. That's the number of bytes that we can address, because we have 32-bit addresses.
There are only 16 Megabytes of actual physical memory, or 2^24 bytes. (This is why the actual physical addresses range from 00000000 to 00ffffff --- 00ffffff is hexadecimal for (2^24)-1.) Virtual memory creates the illusion of 4 Gigabytes of memory, one byte for every possible 32-bit address, by storing as much of it as possible in physical memory, and keeping the extra memory stashed away on disk.
Oh, and how many pages? Each page is a kilobyte in size, or 2^10 = 1024 bytes. So there are 2^32/2^10 = 2^22 = 4,194,304 pages.
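This arithmetic is easy to verify with a couple of lines of Python (a sketch, using the assignment's figures: 32-bit addresses and 1-kilobyte pages):

```python
# Check the sizes using the assignment's figures: 32-bit addresses,
# 1-kilobyte (2^10-byte) pages.
ADDRESS_BITS = 32
PAGE_SIZE = 2 ** 10

virtual_bytes = 2 ** ADDRESS_BITS       # one byte per possible address
virtual_pages = virtual_bytes // PAGE_SIZE

print(virtual_bytes)                    # 4294967296, i.e. 4 gigabytes
print(virtual_pages)                    # 4194304, i.e. 2^22
```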
More facts:
b) What is the advantage of placing a page in physical memory so that the first byte is at a physical memory address ending in 10 bits of zero? There are several good answers to this. The main one is that it makes virtual addresses easy to translate into physical addresses. Whenever a virtual address is translated into a real address in physical memory, the last 10 bits of the virtual address will be the same as the last 10 bits of the physical address. In other words, we don't need to do any mathematics, not even addition, to compute the physical address from the virtual address.
Example: let's say we have a virtual address of 333338ee (hex), or in binary 00110011001100110011100011101110. What is the address of the start of the page? 33333800, the address with the last 10 bits zeroed out. The byte we're addressing is 0011101110, or 238, bytes past the start of the page.
We can translate the page address from 33333800 to whatever the address is in physical memory, by looking it up in the page table. Let's say it turns out to be 00111c00, or binary 00000000000100010001110000000000. That's the address of the start of the page in physical memory. We now add 0011101110 to this to get the address of our byte, 238 bytes past the start of the page. However, we don't need to add this: we just replace the last 10 bits of the new address with our offset: 00000000000100010001110011101110.
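This bit-splicing can be sketched in a few lines of Python. The page-table dict below holds just the single mapping from the example; a real page table is a large, hardware-assisted structure, not a dictionary:

```python
# Sketch of the translation in the worked example above, using bit
# masking instead of addition. The page table holds only the one
# mapping from the example: virtual page 33333800 -> physical 00111c00.
OFFSET_BITS = 10                        # 1-kilobyte pages
OFFSET_MASK = (1 << OFFSET_BITS) - 1    # 0x3FF, i.e. the low 10 bits

page_table = {0x33333800: 0x00111C00}

def translate(vaddr):
    page_start = vaddr & ~OFFSET_MASK   # zero out the low 10 bits
    offset = vaddr & OFFSET_MASK        # the byte's position within its page
    # No addition: just splice the offset onto the physical page address.
    return page_table[page_start] | offset

print(hex(translate(0x333338EE)))       # 0x111cee, i.e. physical 00111cee
```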
We just saved ourselves the trouble of doing a binary addition. Woo. Question: why do we want a computer that doesn't have to do any arithmetic to compute addresses, not even a single addition, when computers can add really well and really quickly? Answer: if we had to perform an addition every time we wanted to find a byte in memory, we'd be performing an enormous number of additions, and all that extra work would add up to a significant slowdown.
Another advantage, mentioned by some students, is that this addressing scheme guarantees that pages never accidentally overlap in memory.
We already performed an address translation in the last problem. What's different about this one? Well, if virtual addresses 10000000 through 10ffffff are currently in physical memory, then physical memory is full: that range covers 2^24 bytes, which is all 16 Megabytes of it. We want to access a new address, which means loading a new page into physical memory when there is no more room.
No problem. The memory manager sees that the new address, F11119F2, resides on the page with address F1111800; the byte we want is 1F2 hex, or 498, bytes past the beginning of the page. The page address is looked up in the page table, which reveals that the page isn't in memory but lies on the disk. It must be loaded into memory, meaning that some page currently in memory must be swapped out to disk to make room.
Which page is swapped out? This depends. Perhaps the computer is designed to swap out the least recently used page of memory, since it may be the page least likely to be accessed again. The computer can use other strategies, but with the same goal: to swap out a page which isn't important. Let's say that it swaps out the page at physical address 00333c00. Now the page on disk with address F1111800 is completely loaded into memory, and we can translate the byte F11119F2 into the physical address 00333dF2.
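The swap decision can be sketched as a toy memory manager. This is only an illustration of the least-recently-used policy mentioned above; the page names, frame numbers, and two-frame capacity are hypothetical:

```python
from collections import OrderedDict

# Toy model of a memory manager with a least-recently-used (LRU)
# replacement policy. Real managers do this with hardware support;
# here an OrderedDict keeps resident pages ordered from least to
# most recently used.
class TinyMemoryManager:
    def __init__(self, capacity):
        self.capacity = capacity            # number of page frames in memory
        self.resident = OrderedDict()       # page -> frame, LRU first

    def access(self, page):
        if page in self.resident:
            self.resident.move_to_end(page)     # now most recently used
            return self.resident[page]
        # Page fault: the page must come in from disk.
        if len(self.resident) >= self.capacity:
            # Memory is full: swap out the least recently used page
            # and reuse its frame.
            victim, frame = self.resident.popitem(last=False)
        else:
            frame = len(self.resident)          # a still-free frame
        self.resident[page] = frame
        return frame

mm = TinyMemoryManager(capacity=2)
for page in ["A", "B", "A", "C"]:
    mm.access(page)
print(sorted(mm.resident))                      # ['A', 'C']: B was swapped out
```

Accessing A again before C keeps A "recent," so when C faults with memory full, B is the least recently used page and gets evicted.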
1 Andrew Appel
2 Sanjeev Arora
3 David August
4 Bernard Chazelle
5 Douglas Clark
6 Perry Cook
7 David Dobkin
8 Edward Felten
9 Adam Finkelstein
10 Thomas Funkhouser
11 Brian Kernighan
12 Andrea LaPaugh
13 Kai Li
14 Richard Lipton
15 Larry Peterson
16 Robert Sedgewick
17 Jaswinder Singh
18 Mona Singh
19 Kenneth Steiglitz
20 Robert Tarjan
21 Randy Wang
22 Kevin Wayne
23 Andrew Yao

a) Give the sequence of names examined to find Mona Singh using the binary search algorithm.
The number of our first entry is 1, and the number of our last entry is 23. The middle entry, (1+23)/2, is 12. Andrea LaPaugh. By alphabetical order, Mona Singh should occur after this, so we concern ourselves with the entries numbered from 13 to 23.
Now, with the "first" entry of our new list numbered 13, the new middle entry is (13+23)/2 = 18. Mona Singh. We found what we're looking for, so we stop.
b) Give the sequence of names examined to find Andrew Appel using the binary search algorithm.

Start with entries 1 through 23. The middle entry, (1+23)/2 = 12, is Andrea LaPaugh; Appel comes before LaPaugh, so we look at entries 1 to 11. The new middle entry, (1+11)/2 = 6, is Perry Cook; Appel comes before Cook, so we look at entries 1 to 5. The middle entry, (1+5)/2 = 3, is David August; Appel comes before August, so we look at entries 1 to 2. The middle entry, (1+2)/2 = 1 (rounding down), is Andrew Appel. We found what we're looking for, so we stop.
c) Vivek Pai will be arriving in January. Give the sequence of names examined to discover that his name is not yet in the list using the binary search algorithm.

Start with entries 1 through 23. The middle entry, 12, is Andrea LaPaugh; Pai comes after LaPaugh, so we look at entries 13 to 23. The middle entry, (13+23)/2 = 18, is Mona Singh; Pai comes before Singh, so we look at entries 13 to 17. The middle entry, 15, is Larry Peterson; Pai comes before Peterson, so we look at entries 13 to 14. The middle entry, (13+14)/2 = 13 (rounding down), is Kai Li; Pai comes after Li, so we look at entry 14 alone. That entry is Richard Lipton; Pai comes after Lipton, so the next range would run from entry 15 to entry 14. At this point we stop, because there are no more entries between our starting and ending index, no more entries in the part of the list where "Vivek Pai" should be.
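All three searches can be reproduced with a short program that records every name it examines. This is a sketch: it assumes names compare by surname and then first name, and that the middle of an even-length range rounds down, matching the worked answers above.

```python
# Binary search over the faculty list, recording each name examined.
faculty = [
    "Andrew Appel", "Sanjeev Arora", "David August", "Bernard Chazelle",
    "Douglas Clark", "Perry Cook", "David Dobkin", "Edward Felten",
    "Adam Finkelstein", "Thomas Funkhouser", "Brian Kernighan",
    "Andrea LaPaugh", "Kai Li", "Richard Lipton", "Larry Peterson",
    "Robert Sedgewick", "Jaswinder Singh", "Mona Singh",
    "Kenneth Steiglitz", "Robert Tarjan", "Randy Wang", "Kevin Wayne",
    "Andrew Yao",
]

def sort_key(name):
    first, last = name.split()
    return (last.lower(), first.lower())    # alphabetize by surname, then first name

def binary_search(target):
    examined = []                       # every name looked at, in order
    first, last = 1, len(faculty)       # 1-based entry numbers, as in the text
    while first <= last:
        mid = (first + last) // 2       # middle entry, rounded down
        name = faculty[mid - 1]
        examined.append(name)
        if sort_key(target) == sort_key(name):
            return examined, True
        elif sort_key(target) < sort_key(name):
            last = mid - 1              # target is in the earlier half
        else:
            first = mid + 1             # target is in the later half
    return examined, False              # range is empty: name not in the list

print(binary_search("Mona Singh"))      # (['Andrea LaPaugh', 'Mona Singh'], True)
```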
Here's the computer program:
assign Count the value one-tenth;
repeat
    print the value assigned to Count;
    assign Count the value (Count + one-tenth);
until (Count equals 1)
What's wrong with this? To start with, one-tenth cannot be represented exactly in floating-point notation. This is a serious problem, because if we add up a bunch of values that are not quite one-tenth, we may never get exactly 1. If Count never exactly equals 1, the loop never halts. On the computer I'm using to write these solutions, for example, I implemented this as a small computer program and executed it, and it goes on effectively forever.
How to avoid this? If we made the last line of the program "until (Count equals-or-is-greater-than 1)," then the program would at least halt. In any program using floating-point numbers, it is unwise to check whether a number is exactly equal to a value; instead, one should check whether it falls within a given range.
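The round-off is easy to see in Python. This is a sketch: Python's 64-bit floats are not Brookshear's notation, but one-tenth is inexact in both, so the same behavior appears.

```python
# Python's 64-bit floats can't represent one-tenth exactly either.
total = 0.0
for _ in range(10):
    total += 0.1
print(total)            # 0.9999999999999999, not 1.0

# The original exact-equality test, capped so this demo terminates:
count = 0.1
tries = 0
while count != 1.0 and tries < 1000:
    count += 0.1
    tries += 1
print(tries)            # 1000: the cap fired; Count never hit exactly 1

# The repaired "equals-or-is-greater-than" test halts on its own.
count = 0.1
steps = 0
while count < 1.0:
    count += 0.1
    steps += 1
print(steps)            # 10 additions, then the loop stops
```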
Some people pointed out that, using Brookshear's floating-point notation, one-tenth is rounded to one-sixteenth or one-eighth. This isn't true: one-tenth is represented as 3/32. If one-tenth were represented as one-sixteenth, the value of Count would eventually equal exactly 1 and the loop would stop, even though it would iterate 16 times rather than 10. Even when something like this happens, it is plain good luck. Real-world computers use different kinds of floating-point notation, and one can never be certain that the same kind of round-off error occurs on all computers.