The median score of this test is 59 and the mean is 55, consistent with the fact that it was a long and difficult test. Since the course grade is curved, you should measure your performance relative to other students in the class. A more detailed distribution is:
score   number
 <40       4
40-44      6
45-49      7
50-54      7
55-59      9
60-64     12
65-69      7
70-74      3
75-79      4
>=80       4
This is a good problem because there are 3 good solutions:
Here's the code:
Memory
location   Code    Comment
00         11a3    Load R1 with [a3]
02         2200    R2 is our total: set R2 = 0
04         2040    Load R0 with 64 ("40" hexadecimal!)
06         2300    R3 is our loop counter: set R3 = 0
08         2401    R4 is our increment: set R4 = 1

[Here's the start of the loop. Note the memory location is 0a, not 10.]

0a         5221    Set R2 = R2 + R1
0c         5334    Set R3 = R4 + R3 (in other words, R3++;)
0e         B312    If R3 == 64, jump forward to 12
10         B00a    Else R3 < 64, jump back up to 0a

[That was the loop.]

12         32a4    Store the total in a4
14         C000    STOP THE PROGRAM.
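The loop above can be sketched in a high-level language. This is our own Python rendering, not part of the exam; the variable names stand in for the registers noted in the comments:

```python
def multiply_by_64_loop(value):
    """Repeated addition, mirroring the machine-code loop above."""
    total = 0          # R2: our running total
    counter = 0        # R3: loop counter
    increment = 1      # R4: loop increment
    stop = 0x40        # R0: stopping value, 64 ("40" hexadecimal!)
    while True:
        total += value         # 5221: R2 = R2 + R1
        counter += increment   # 5334: R3 = R3 + R4
        if counter == stop:    # B312: jump out of the loop
            break              # B00a: otherwise jump back to the top
    return total               # 32a4: store the total in a4

print(multiply_by_64_loop(3))  # → 192, i.e., 3 * 64
```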
Here's the code for the second solution:
Memory
location   Code    Comment
00         11a3    Load R1 with [a3]
02         2040    Load R0 with 64, just like last time
04         2301    R3 is our loop counter: set R3 = 1
06         5111    Set R1 = R1 + R1: double R1
08         5333    Set R3 = R3 + R3: double R3
0a         B30e    If R3 == 64, jump forward to 0e
0c         B006    Else R3 < 64, jump back up to 06
0e         31a4    Store the total in a4
10         C000    STOP THE PROGRAM.

We use fewer registers, and the program is smaller. Neat, but the important thing is to get a working loop, and the first solution might be more helpful in the design of loops. Consider the following code template:
20xx -- Load R0 with stopping value of loop (in hexadecimal)
2300 -- Load R3 with the initial loop counter
2401 -- Load R4 with the loop increment

[Remember how a loop has three components? There they are: initial condition (R3), stopping condition (R0), increment (R4).]

...  -- Loop code: put stuff here you want to do in a loop
...  -- Loop code begins at some address yy
5334 -- counter = counter + increment
B3xx -- If counter == stopping value, jump OUT of loop
B0yy -- Else, jump BACK UP to top of loop
...  -- Rest of program, starting at address xx
C000
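The same template can be written out in Python. This sketch is ours, not from the exam; the comments tie each line back to the machine instructions above:

```python
stop = 0x40       # 20xx: load R0 with the stopping value (64 here)
counter = 0       # 2300: load R3 with the initial loop counter
increment = 1     # 2401: load R4 with the loop increment

while True:
    # ... loop body: put the work you want repeated here ...
    counter += increment      # 5334: counter = counter + increment
    if counter == stop:       # B3xx: jump OUT of the loop
        break                 # B0yy: otherwise jump BACK UP to the top
# ... rest of program (C000 stops the machine) ...
```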
The third solution uses the rotate instruction:

Memory
location   Code    Comment
00         11a3    Load R1 with [a3]
02         A102    Rotate R1 right 2 bits
04         31a4    Store the total in a4
06         C000    STOP THE PROGRAM.

Assuming [a3] is small enough that it can be multiplied by 64 and still fit in a byte, this program multiplies it by shifting the actual bits around. The last two bits are taken off the right side of the number and tacked onto the left side! For instance, the binary number 00000011 is turned into 11000000. In decimal, we've just turned 3 into 3*64.
Again, this is a neat little trick that illustrates the degrees of efficiency we can obtain by using different algorithms for the same problem.
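The rotate trick is easy to check in Python. This helper is our own sketch of what the A102 instruction does to an 8-bit register:

```python
def rotate_right_2(byte):
    """Rotate an 8-bit value right by 2 bits (what A102 does to R1).

    The low 2 bits wrap around to the top, so for values small enough
    that value * 64 still fits in a byte, this IS multiplication by 64.
    """
    byte &= 0xFF
    return ((byte >> 2) | (byte << 6)) & 0xFF

print(rotate_right_2(0b00000011))  # → 192, i.e., 0b11000000 = 3 * 64
```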
Part a
The scheduler already uses priorities to decide which process to schedule next. Without "nicing", all processes would start with the same priority; with "nicing", "niced" processes would start with a lower priority. The priority of a process would rise as it sat in the ready queue without being scheduled. When a process is interrupted, the scheduler would then choose the ready process with the highest current priority to run next.
You may have lost a point or two on this part of the problem with little or no comment. These points considered whether your solution guaranteed that "niced" processes eventually get a turn even if regular-priority processes keep arriving and whether your solution still gave less priority to "niced" processes even if regular processes kept needing I/O before finishing a time slice.
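One way to picture such a scheme is the toy sketch below. All the names and numbers here are our own assumptions, not from any real operating system: "niced" processes start with a lower priority, and every process passed over in the ready queue is aged upward, so even a niced process eventually gets a turn.

```python
BASE_PRIORITY = 10   # assumed starting priority
NICE_PENALTY = 5     # assumed penalty for a "niced" process
AGE_BOOST = 1        # assumed priority gain per round spent waiting

class Process:
    def __init__(self, name, niced=False):
        self.name = name
        self.base = BASE_PRIORITY - (NICE_PENALTY if niced else 0)
        self.priority = self.base

def pick_next(ready_queue):
    """Run the highest-priority process; age everyone who had to wait."""
    chosen = max(ready_queue, key=lambda p: p.priority)
    for p in ready_queue:
        if p is not chosen:
            p.priority += AGE_BOOST   # waiting raises priority
    chosen.priority = chosen.base     # reset once it gets its turn
    return chosen
```

Even if regular-priority processes keep winning at first, the niced process's priority climbs by AGE_BOOST each round until it is scheduled, which is exactly the starvation guarantee mentioned above.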
Part b
It is more important to "nice" compute-bound processes. I/O-bound processes keep stopping to wait for peripheral devices. The normal time-sharing/multitasking mechanism allows another process to run (use the CPU) while I/O is taking place, putting the process waiting for I/O on the waiting queue. No "nicing" mechanism is necessary to utilize the waiting time. However, a compute-bound process will use all of its time slice. If there is a long compute-bound process that gets a turn, say, every 5th time slice, it will be getting 1/5 of the CPU, slowing down the machine in the eyes of all other processes by 1/5.
You'll notice that no swaps were performed in this example. After each binary search we find that the element we want to insert is already in place, and leave it there.
So, what comparisons are performed in this binary search? We'll choose a "middle" rule that rounds up: a sublist starting at entry a and ending at entry b has a middle entry of (a+b)/2 rounded up, or [(a+b+1)/2] rounded down. A different rule was okay, but it had to be used consistently---no rounding up in one step and rounding down in another. One also had to properly keep track of the subarray bounds, which were different in each iteration of the algorithm.
Starting with the subarray 1..8, Sally is compared to the center of the list, entry [(1+8+1)/2], or entry 5: Jane. "Sally" > "Jane", so we then compare Sally to the middle of the second half: entry [(6+8+1)/2], or 7: Paul. "Sally" > "Paul", so we compare to entry [(8+8+1)/2]=8: Rob. Sally should be placed at the end, where the name already is, so no swapping occurs.
In iteration 10 we want to insert entry 10 (Tina) into the subarray of entries 1..9 (Ann..Sally). Notice that the list has a different size now, and different comparisons will be made. First, Tina is compared with entry [(1+9+1)/2]=5: Jane. "Tina" > "Jane", so Tina is compared with entry [(6+9+1)/2]=8: Rob. "Tina" > "Rob", so now we compare with entry [(9+9+1)/2]=9: Sally. After this comparison, we find that "Tina" is in the correct array slot, and again, no moves need to be performed.
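The two comparison sequences above can be reproduced with a short sketch. This is our own Python, using the round-up middle rule m = (a+b+1)//2 on a 1-indexed subarray a..b; only Ann, Jane, Paul, Rob, Sally, and Tina appear in the exam, so the names filling entries 2, 3, 4, and 6 are made up:

```python
def comparisons_made(names, key, a, b):
    """Return the entries examined while binary-searching names[a..b]."""
    compared = []
    while a <= b:
        m = (a + b + 1) // 2            # middle entry, rounded up
        compared.append(names[m - 1])   # names is 0-indexed internally
        if key > names[m - 1]:
            a = m + 1                   # search the upper half
        elif key < names[m - 1]:
            b = m - 1                   # search the lower half
        else:
            break                       # key matches the middle entry
    return compared

entries = ["Ann", "Bob", "Cal", "Eve", "Jane", "Kim", "Paul", "Rob"]
print(comparisons_made(entries, "Sally", 1, 8))
# → ['Jane', 'Paul', 'Rob']
print(comparisons_made(entries + ["Sally"], "Tina", 1, 9))
# → ['Jane', 'Rob', 'Sally']
```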
A sort that performs only n*log n comparisons tends to be better than one that performs n^2 comparisons, because n^2 increases faster as n goes up. More comparisons, slower program.
But here's the ugly part, and it answers the second part of this question: what happens when we run this program on a randomly ordered list? Sure, there are only n*log n comparisons, but we also have to move items around when they're out of order. How many moves? If, in iteration k of the loop we find an element is out of order, we have to sift some of the first k elements around in order to make room for the new item. This turns out to be a lot of sifting, on the order of n^2 moves! So it isn't a fast sort after all, in general. On a list that's in order, it should run speedily. But, if a list is already in order, why sort it?
[How do we know the sort performs on the order of n^2 moves? We have n iterations, and in iteration k we'll have to, on average, insert the kth item somewhere in the middle of the already sorted part. That means moving k/2 items out of the way each time. This gives about ( 1 + 2 + 3 ... (n-1) )/2 = n(n-1)/4 moves.]
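The comparison-versus-move gap is easy to observe with a little instrumentation. This sketch is ours, not from the exam; it counts both quantities while sorting a random list:

```python
import random

def binary_insertion_sort(a):
    """Insertion sort with binary search; count comparisons and moves."""
    comparisons = moves = 0
    for k in range(1, len(a)):
        item, lo, hi = a[k], 0, k
        while lo < hi:                    # binary search for the slot
            mid = (lo + hi) // 2
            comparisons += 1
            if item < a[mid]:
                hi = mid
            else:
                lo = mid + 1
        moves += k - lo                   # elements sifted to make room
        a[lo + 1:k + 1] = a[lo:k]         # shift them one slot right
        a[lo] = item
    return comparisons, moves

random.seed(1)
data = [random.random() for _ in range(500)]
comparisons, moves = binary_insertion_sort(data)
# On random input, moves (~n^2/4) dwarf comparisons (~n log n).
```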
Part a
The execution of fillarray(actualA,1,7,0) and subsequent recursive executions is as follows:
fillarray(actualA,1,7,0) executes:
  actualA[1]=0;
  execute fillarray(actualA,2,7,1):
    actualA[2]=1;
    execute fillarray(actualA,4,7,2):
      actualA[4]=2;
      execute fillarray(actualA,8,7,3): 8 > 7 so do nothing
      execute fillarray(actualA,9,7,3): 9 > 7 so do nothing
    execute fillarray(actualA,5,7,2):
      actualA[5]=2;
      execute fillarray(actualA,10,7,3): 10 > 7 so do nothing
      execute fillarray(actualA,11,7,3): 11 > 7 so do nothing
  execute fillarray(actualA,3,7,1):
    actualA[3]=1;
    execute fillarray(actualA,6,7,2):
      actualA[6]=2;
      execute fillarray(actualA,12,7,3): 12 > 7 so do nothing
      execute fillarray(actualA,13,7,3): 13 > 7 so do nothing
    execute fillarray(actualA,7,7,2):
      actualA[7]=2;
      execute fillarray(actualA,14,7,3): 14 > 7 so do nothing
      execute fillarray(actualA,15,7,3): 15 > 7 so do nothing
  done

So at the end of the execution, the values in array actualA are:
actualA[1]=0
actualA[2]=1
actualA[3]=1
actualA[4]=2
actualA[5]=2
actualA[6]=2
actualA[7]=2
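The trace above corresponds to a short recursive routine. This Python transcription is ours (the signature is assumed from the trace): each call labels slot start with depth, then recurses on 2*start and 2*start+1 until start exceeds end.

```python
def fillarray(a, start, end, depth):
    if start > end:          # base case: start is outside the array
        return
    a[start] = depth
    fillarray(a, 2 * start, end, depth + 1)
    fillarray(a, 2 * start + 1, end, depth + 1)

actualA = [None] * 8         # index 0 unused; entries 1..7 as in the trace
fillarray(actualA, 1, 7, 0)
print(actualA[1:])           # → [0, 1, 1, 2, 2, 2, 2]
```

Each entry ends up holding its depth in the implicit binary tree rooted at entry 1, which matches the values listed above.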
Part b
The "base case" for this recursion, i.e., the condition that causes execution to complete, is for start to be greater than end. Since each time fillarray is called recursively, start is at least doubled and end is held constant, start must eventually exceed end as long as both are positive to begin with.