CS 426 Exercises
   Image Processing

  1. "A pixel is a sample, not a little square."

  2. What are the implications of this statement on image processing algorithms?
  3. If a pixel is an infinitely small sample, how is it visible on the screen of a CRT display?

  4. Display on a CRT is most similar to what reconstruction filter?
  5. What is intensity quantization?  When does it happen?  How can we compensate for it?
  6. True or false: dithering spreads quantization error among pixels.
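For questions 5 and 6, a minimal Floyd-Steinberg sketch shows how dithering diffuses each pixel's quantization error to its neighbors (the 1-bit target and the flat mid-gray test image are illustrative assumptions):

```python
import numpy as np

def floyd_steinberg(img, levels=2):
    """Quantize a grayscale image in [0, 1] to `levels` gray levels,
    diffusing each pixel's quantization error to unvisited neighbors."""
    out = img.astype(float).copy()
    h, w = out.shape
    step = levels - 1
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = round(old * step) / step      # nearest representable level
            err = old - new                     # quantization error
            out[y, x] = new
            # Floyd-Steinberg weights: 7/16 right, 3/16, 5/16, 1/16 below
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.full((8, 8), 0.5)        # flat mid-gray input
dithered = floyd_steinberg(gray)
# every output pixel is 0 or 1, yet the local average stays near 0.5
print(dithered.mean())
```

The quantization error never disappears; it is pushed onto neighboring pixels, so the average intensity of a region is preserved even though each pixel is only black or white.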
  7. How many samples are required to represent a given signal without loss of information?
  8. What signals can be reconstructed without loss for a given sampling rate?
  9. When is a signal bandlimited?  What is the Nyquist rate for a bandlimited signal?
  10. What is aliasing?  When does it happen?  Give three examples.
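A quick numeric check of undersampling for questions 9 and 10 (the 7 Hz / 3 Hz / 10 Hz values are chosen for illustration): sampling a 7 Hz sine at 10 Hz, below its Nyquist rate of 14 Hz, produces samples identical to those of a 3 Hz sine.

```python
import numpy as np

fs = 10.0                           # sampling rate (Hz); Nyquist limit is fs/2 = 5 Hz
t = np.arange(32) / fs              # sample times
high = np.sin(2 * np.pi * 7 * t)        # 7 Hz sine, above the Nyquist limit
alias = -np.sin(2 * np.pi * 3 * t)      # its alias at fs - 7 = 3 Hz (sign-flipped)

# the two signals are indistinguishable at these sample points
print(np.allclose(high, alias))     # True
```

Once the samples are taken, no reconstruction filter can tell the two apart: the high frequency has "aliased" to a lower one.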
  11. What is antialiasing?  How does antialiasing compare with dithering?
  12. Convolution in the spatial domain is equivalent to what operation in the frequency domain?
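The convolution theorem behind question 12 can be verified numerically (a sketch using NumPy's FFT; the zero-padding makes the FFT's circular convolution match linear convolution):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])      # signal
k = np.array([0.25, 0.5, 0.25])         # filter kernel

# spatial-domain convolution
spatial = np.convolve(a, k)

# frequency domain: pointwise multiplication of zero-padded spectra
n = len(a) + len(k) - 1
freq = np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(k, n)).real

print(np.allclose(spatial, freq))   # True
```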
  13. What is a sinc reconstruction filter?  What are its properties?  Why don't we use sinc filters for reconstruction in practice?
  14. Write a convolution filter well-suited for edge detection. Same for blurring.
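One common answer for each part of question 14, sketched in code (the 3x3 Laplacian and box-filter kernels are standard choices, not the only ones; the flat test image is an assumption for illustration):

```python
import numpy as np

# Laplacian kernel: responds where intensity changes abruptly (edge detection)
edge = np.array([[ 0,  1,  0],
                 [ 1, -4,  1],
                 [ 0,  1,  0]], dtype=float)

# 3x3 box filter: averages each pixel with its neighbors (blurring)
blur = np.full((3, 3), 1 / 9)

def convolve2d(img, kernel):
    """Naive 2D filtering, 'valid' region only (no padding).
    Both kernels above are symmetric, so flipping the kernel
    (true convolution) would give the same result."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

flat = np.full((5, 5), 0.7)          # constant image
print(convolve2d(flat, edge))        # all zeros: no edges in a flat image
print(convolve2d(flat, blur))        # all 0.7: blurring preserves constants
```

Note that the edge kernel's weights sum to 0 (flat regions map to zero) while the blur kernel's sum to 1 (overall brightness is preserved).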
  15. Compare forward mapping and reverse mapping for image processing.  What are the advantages and disadvantages of each method?
  16. What is the meaning of the following rgba tuples: (1,1,1,1), (1,1,1,0.5), (0.5,0.5,0.5,1), (1,1,1,0)?
  17. What is the resulting pixel color of: (1,0,0,0.5) over (0,1,0,0.5) over (0,0,1,0.5)?
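A sketch of the Porter-Duff "over" operator in premultiplied-alpha form, applied to question 17's stack of three half-transparent primaries:

```python
def over(f, b):
    """Porter-Duff 'over' on premultiplied (r, g, b, a) tuples:
    out = f + (1 - a_f) * b, applied componentwise."""
    k = 1 - f[3]
    return tuple(fc + k * bc for fc, bc in zip(f, b))

def premultiply(r, g, b, a):
    """Convert a non-premultiplied rgba tuple to premultiplied form."""
    return (r * a, g * a, b * a, a)

red   = premultiply(1, 0, 0, 0.5)
green = premultiply(0, 1, 0, 0.5)
blue  = premultiply(0, 0, 1, 0.5)

result = over(over(red, green), blue)
print(result)   # premultiplied (0.5, 0.25, 0.125, 0.875)
```

Dividing the color channels by the final alpha recovers the non-premultiplied color; note how each layer's contribution is halved by every half-transparent layer above it.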