CS 111 Fall 1999 Exam 1 solutions

Problem 1 (graded by Andrea LaPaugh)

I was looking not only for a correct algorithm, but also for a precise specification of the algorithm using the basic operations. If you got the basic idea of a correct algorithm but did not express it precisely, you got partial credit; the amount of partial credit depended on how well you expressed your algorithm.

One possible algorithm:

   i = 1
   repeat until i > the number of items in the list
      examine the ith item in the list
      if the ith item in the list is a mammal or a bird
         then copy the ith item as a new item in the list of warm-blooded animals
      increment i
   end repeated instructions

Problem 2 (graded by Elena Oranskaya)

a) The decimal representation of the 2's complement number 01001010 is

      0*1 + 1*2 + 0*4 + 1*8 + 0*16 + 0*32 + 1*64 + 0*128 = 74

b) i)    011110        ii)    011110
       + 101111              + 010001
       --------              --------
         001101                101111

   In i), the addition produces a carry out of 1, which can be discarded.

c) Part ii) suffers from overflow. The reason is that 6-bit 2's complement notation can only represent numbers in the range -32..31, while the addition of 30 and 17 gives 47. The error is also obvious from the answer: the result of the addition is -17, a negative number! That is an impossible result for the addition of two positive integers. Alternatively, the reason is that the carry in to the last bit of the addition (which is 1) is different from the carry out of the last bit of the addition (which is 0).

Problem 3 (graded by Andrea LaPaugh)

   x y z | NOT(y) AND NOT(z)     | x OR (NOT(y) AND NOT(z))
         | (intermediate result) | (answer)
   ------|-----------------------|-------------------------
   0 0 0 |          1            |            1
   0 0 1 |          0            |            0
   0 1 0 |          0            |            0
   0 1 1 |          0            |            0
   1 0 0 |          1            |            1
   1 0 1 |          0            |            1
   1 1 0 |          0            |            1
   1 1 1 |          0            |            1

Problem 4 (graded by Elena Oranskaya)

a) The code cannot correct all 1-bit errors in transmission. Example: consider the transmitted code word 101111, and suppose it was received as 111111. It is clear from the parity bits that an error occurred.
However, the code has no way to determine which bit was flipped: the original signal could have been any of 011111, 101111, or 110111.

b) Let us compute the minimum Hamming distance between any two code words. Consider a 1-bit difference within the first three bits. This means that the two code words have different parity, so they will also differ in each of the three parity bits, making the Hamming distance 4. Now consider a 2-bit difference within the first three bits. The two code words then have the same parity, so they match in each of the parity bits, making the Hamming distance between them 2. So 2 is clearly the minimum Hamming distance that occurs. This means that one can get from a legal code word, such as 101111, to another legal code word, such as 011111, by flipping two bits. Thus it is not the case that all 2-bit errors can be detected.

What about 1-bit errors? If we change any single one of the first three bits, the parity bits will be incorrect, and the error will be detected. If we change none of the first three bits but change a parity bit, then the changed parity bit will be wrong and the error detected. Therefore, any 1-bit error is detectable: this code provides 1-bit error detection.

There is a general fact that the error detection capability of a code is k-1 bits if the minimum Hamming distance between any two code words is k. This fact is a generalization of the fact given in class that repeating bits k times to form a code word gives (k-1)-bit error detection.
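The Hamming-distance argument above can be checked by brute force. The sketch below assumes a hypothetical reconstruction of the exam's code (not given in these solutions): each code word is three data bits followed by three copies of a parity bit, with the parity bit chosen so that 101111 is legal, i.e. the parity bit is 1 exactly when the data bits contain an even number of 1's. Under that assumption it confirms the minimum distance of 2 and the three-way ambiguity of the received word 111111.

```python
from itertools import product

def encode(data):
    # Assumed encoding (hypothetical reconstruction of the exam's code):
    # three data bits followed by three copies of a parity bit, where the
    # parity bit is 1 exactly when the data bits have an even number of 1's
    # (so that 101 -> 101111, matching the solution's example).
    p = 1 - sum(data) % 2
    return tuple(data) + (p, p, p)

def hamming(a, b):
    # Number of bit positions in which code words a and b differ.
    return sum(x != y for x, y in zip(a, b))

codewords = [encode(d) for d in product((0, 1), repeat=3)]

# Minimum Hamming distance between distinct code words is 2, so the code
# detects all 1-bit errors (2 - 1 = 1) but not all 2-bit errors.
min_dist = min(hamming(a, b) for a in codewords for b in codewords if a != b)
print(min_dist)  # 2

# The received word 111111 is one bit flip away from three legal code
# words, so a 1-bit error can be detected here but not corrected.
received = (1, 1, 1, 1, 1, 1)
near = sorted(c for c in codewords if hamming(c, received) == 1)
print(near)  # the code words 011111, 101111, and 110111
```

Enumerating all pairs is feasible here because the code has only eight words; the same check works for any small block code.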
Problem 5 (graded by Andrea LaPaugh)

Part a

Using spaces to show the repeat and suffix components of the bit string:

   00(1,1,0)(3,3,0)(7,7,0)(15,15,0)
      = 00 0 0 (3,3,0)(7,7,0)(15,15,0)
      = 0000 000 0 (7,7,0)(15,15,0)
      = 00000000 0000000 0 (15,15,0)
      = 0000000000000000 000000000000000 0
      = 00000000000000000000000000000000
      = 32 0's

Part b

You will notice that the Lempel-Ziv encoding is not represented as bits, so asking whether it is doing a good job of compressing the sequence of bits is a bit like comparing apples to grapes. But we can make a couple of observations.

First, let's just count characters: 32 "0"s versus the number of characters in the Lempel-Ziv encoding as written, which is also 32. No compression there. Now, using parentheses is a bit verbose, but we need some symbol to separate the triples of the compression. For example, we might use a "|": 00|1,1,0|3,3,0|7,7,0|15,15,0. This saves 4 characters, but does not make a dramatic difference.

At the other end of the spectrum, we can imagine that the Lempel-Ziv encoding is written in bits, with some undetermined way of delimiting the triples, and count only the number of bits needed to represent the numbers in the triples. This is an optimistic (low) estimate of how many bits the Lempel-Ziv encoding would take. Doing this, we get 2 bits for "00", 1 bit for each "1", 2 bits for each "3", 3 bits for each "7", and 4 bits for each "15", for a total of 22 bits. This analysis suggests that even at the bit level Lempel-Ziv is not doing a good job of compressing the bit sequence.

Run-length encoding would do a better job of compressing 32 0's. The method we discussed in class uses 10 bits: one bit to indicate that run-length encoding is being used, 1 bit for the "0", and 8 bits to represent the number of repetitions (up to 255), in this case 32. To be fair to Lempel-Ziv, it is doing a kind of run-length encoding, but one limited to an ever-expanding prefix length.
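The step-by-step expansion in Part a can be mechanized. The sketch below reads each triple as (offset, length, suffix): copy `length` characters starting `offset` characters back from the end of the text decoded so far, then append the suffix. That reading is an assumption (in this problem offset and length are equal in every triple, so their order cannot be distinguished), but it reproduces the expansion shown above.

```python
def lz_decode(prefix, triples):
    # Decode a literal prefix followed by Lempel-Ziv triples, where each
    # triple (offset, length, suffix) means: copy `length` characters
    # starting `offset` characters back from the end of the output so far,
    # then append the `suffix` character. This interpretation is an
    # assumption; it matches the expansion worked out in Part a.
    out = list(prefix)
    for offset, length, suffix in triples:
        start = len(out) - offset
        for i in range(length):          # copy one character at a time, so
            out.append(out[start + i])   # overlapping copies also work
        out.append(str(suffix))
    return "".join(out)

decoded = lz_decode("00", [(1, 1, 0), (3, 3, 0), (7, 7, 0), (15, 15, 0)])
print(decoded == "0" * 32)  # True: the encoding expands to 32 0's
```

Copying character by character (rather than slicing) matters in general: a triple may copy a region that overlaps the characters it is producing.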
As one increases the number of 0's further, Lempel-Ziv does a better and better job: representing 64 0's requires only appending the triple (31,31,0), and representing 128 0's requires only further appending the triple (63,63,0). Thus, counting characters, we would be representing 128 0's with 50 characters. Things get even better as the number of 0's grows larger.
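The character counts above can be tabulated for longer runs. This sketch assumes, as in the problem, that the run length is a power of two, so the encoding is the prefix "00" followed by doubling triples (1,1,0)(3,3,0)(7,7,0)...; the helper name `lz_char_count` is mine, not from the course.

```python
def lz_char_count(num_zeros):
    # Character count of the Lempel-Ziv encoding of num_zeros 0's written
    # in the style used above: the prefix "00" followed by triples
    # (1,1,0)(3,3,0)(7,7,0)..., each of which doubles the decoded length.
    # Assumes num_zeros is a power of two, as in the problem.
    count, decoded_len = 2, 2          # the prefix "00"
    while decoded_len < num_zeros:
        n = decoded_len - 1            # next triple is (n, n, 0)
        count += len("(%d,%d,0)" % (n, n))
        decoded_len *= 2               # the triple doubles the length
    return count

for zeros in (32, 64, 128, 1024):
    print(zeros, lz_char_count(zeros))
# 32 0's take 32 characters, 64 take 41, and 128 take 50; the ratio of
# encoding size to input size keeps shrinking as the run gets longer.
```

Each doubling of the run adds only one triple of O(log n) characters, which is why the encoding grows so much more slowly than the input.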