Andy Zeng CS 194-26

Overview

Our objective is to produce color images from the digitized Prokudin-Gorskii glass plate images. We want to align the three color channel images (blue, green, and red) and stack them to form a single RGB color image.

Naive Implementation

Given a digitized glass plate image, we split it vertically into equal thirds. The first (top) section is the blue channel image (B), the second is the green channel image (G), and the third is the red channel image (R). To align G to B, we exhaustively search over a window of possible displacements and keep the displacement vector that yields the best score under some image matching metric comparing B against G shifted by that vector. We then shift G by the best displacement vector and stack it on B. The same process aligns R to B.
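A minimal sketch of this exhaustive search in Python/numpy, under some assumptions: single-channel float arrays, np.roll for shifting (which wraps pixels around the border), and illustrative function names and window size.

    import numpy as np

    def ssd(a, b):
        # Sum of squared differences; lower is better.
        return np.sum((a - b) ** 2)

    def align_exhaustive(moving, fixed, window=15):
        # Brute-force search over all displacements in
        # [-window, window] x [-window, window]; return the best (dy, dx).
        best_score, best_shift = np.inf, (0, 0)
        for dy in range(-window, window + 1):
            for dx in range(-window, window + 1):
                score = ssd(np.roll(moving, (dy, dx), axis=(0, 1)), fixed)
                if score < best_score:
                    best_score, best_shift = score, (dy, dx)
        return best_shift

    def colorize(plate):
        # Split the scanned plate into vertical thirds (B on top, then G,
        # then R), align G and R to B, and stack into an RGB image.
        h = plate.shape[0] // 3
        b, g, r = plate[:h], plate[h:2 * h], plate[2 * h:3 * h]
        g_shift = align_exhaustive(g, b)
        r_shift = align_exhaustive(r, b)
        return np.dstack([np.roll(r, r_shift, axis=(0, 1)),
                          np.roll(g, g_shift, axis=(0, 1)),
                          b])

NCC would replace ssd with a normalized dot product over the two images, flipping the comparison to maximize rather than minimize.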

Image Pyramids

To handle alignment for larger images, where an exhaustive search over displacements becomes slow, we implement a faster search procedure: image pyramids. Given two images, the alignment process recursively resizes them at multiple scales (by factors of 2) and aligns them from the coarsest scale (smallest image size) to the finest. Each level gradually refines the estimated displacement using a much smaller search window, substantially reducing run time.
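A sketch of the recursion, reusing ssd and align_exhaustive from the sketch above (the base-case size and the +/-2 pixel refinement radius are illustrative choices):

    def align_pyramid(moving, fixed, min_size=64):
        # Base case: small enough for a full exhaustive search.
        if min(fixed.shape) <= min_size:
            return align_exhaustive(moving, fixed)
        # Recurse on half-resolution copies (plain 2x2 subsampling here;
        # low-pass filtering before subsampling would be more robust).
        dy, dx = align_pyramid(moving[::2, ::2], fixed[::2, ::2], min_size)
        dy, dx = 2 * dy, 2 * dx
        # Refine the doubled coarse estimate with a small local search.
        best_score, best_shift = np.inf, (dy, dx)
        for ddy in range(-2, 3):
            for ddx in range(-2, 3):
                shift = (dy + ddy, dx + ddx)
                score = ssd(np.roll(moving, shift, axis=(0, 1)), fixed)
                if score < best_score:
                    best_score, best_shift = score, shift
        return best_shift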

Pre-alignment Cropping

Prior to alignment, 10% of the image width/height is cropped away from each side to remove the noisy borders of the scanned plates.
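As a sketch (the default frac follows the 10% figure above):

    def crop_border(img, frac=0.10):
        # Drop `frac` of the height/width from every side before alignment.
        h, w = img.shape[:2]
        dh, dw = int(frac * h), int(frac * w)
        return img[dh:h - dh, dw:w - dw]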

Features (Bells & Whistles #1)

As in the naive implementation, aligning glass plate images using the SSD of raw pixel intensities (which seems to outperform NCC) as the image matching metric actually does quite well for most test images. However, there are occasional misalignments, e.g.

B Raw
G Raw
R Raw
(Raw) G: [49,24] R: [100,-206]

Note that the man's outfit yields high raw pixel intensity values in B, but not in R. Hence, when aligning R to B based on the SSD of raw pixel intensities, even a good displacement vector can receive a very poor score.


So to fix the problem and capture this variation, we measure image similarity with gradients instead, turning to edge detection. I used Dollar and Zitnick's Fast edge detection using structured forests. Capable of multi-scale edge detection, this detector offers a superior run time while maintaining state-of-the-art accuracy. After changing the image matching metric to take the SSD over the edge response maps of the glass plate images, alignment accuracy improves (a code sketch follows the examples below), e.g.

B Edges
G Edges
R Edges
(Edge) G: [50,23] R: [107,40]
(Raw) G: [56,8] R: [116,11]
(Raw) Close-up
(Edge) G: [55,9] R: [118,13]
(Edge) Close-up
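A sketch of the metric change, with a Sobel gradient magnitude standing in for the structured-forest detector actually used here (scipy is assumed; edge_response is an illustrative name):

    from scipy import ndimage

    def edge_response(img):
        # Stand-in edge map: Sobel gradient magnitude. This project uses
        # Dollar and Zitnick's structured-forest detector instead.
        gy = ndimage.sobel(img.astype(float), axis=0)
        gx = ndimage.sobel(img.astype(float), axis=1)
        return np.hypot(gx, gy)

    # Alignment is unchanged; only the inputs to the SSD metric differ, e.g.
    # align_pyramid(edge_response(g), edge_response(b)).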

Measuring image similarity with edge detection alone works well overall, but it occasionally introduces minor alignment inaccuracies that did not occur with raw pixel intensities, e.g.

(Edge) G: [48,13] R: [108,10]
(Edge) Close-up
(Raw) G: [53,14] R: [112,11]
(Raw) Close-up
(Edge) G: [3,3] R: [8,3]
(Edge) Close-up
(Raw) G: [3,3] R: [6,3]
(Raw) Close-up

Interestingly, these cases performed better under raw pixel intensity SSD than under edge response SSD, indicating that the edge maps discard image-specific intensity information that sometimes disambiguates the alignment. To fix this, I changed the image matching metric to use raw pixel SSD when the two channels are similar in overall intensity (the absolute value of the relative difference of their mean pixel intensities falls below a small threshold), and edge response SSD otherwise.
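A sketch of this hybrid selection, reusing edge_response from above (the threshold value is illustrative, not the one used here):

    def metric_inputs(a, b, thresh=0.1):
        # If the two channels have similar overall brightness, raw pixel
        # intensities are a reliable signal; otherwise compare edge maps.
        mean_a, mean_b = a.mean(), b.mean()
        rel_diff = abs(mean_a - mean_b) / ((mean_a + mean_b) / 2)
        if rel_diff < thresh:
            return a, b                            # raw pixel SSD
        return edge_response(a), edge_response(b)  # edge response SSD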

Cropping (Bells & Whistles #2)

After alignment, prior to combining the shifted glass plate images into a single color image, we compute cropping parameters for each individual glass plate image. After generating the color image, we combine these parameters (via tightfit) to crop the final image. Our objective is to remove as much boundary noise as possible while keeping as much of the original image as possible.


Approach: Given a single-channel image, we first run edge detection to obtain an edge response map. We then average the map horizontally to obtain a vector whose length equals the image height, and set a threshold of 2 standard deviations above the mean of this vector. A sequential search over the first 8% of the vector, from right to left, finds the index of the first value above the threshold; a similar search, from left to right, is performed over the last 8%. Conceptually, this locates the approximate semantic boundary of the image, and the two indices are saved as the top and bottom cropping bounds. The same process on the vertically averaged edge response map yields the left and right bounds.
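A sketch of the top/bottom bound search (the left/right bounds come from the same routine applied to edges.mean(axis=0)):

    def crop_bounds(edges, frac=0.08, k=2.0):
        profile = edges.mean(axis=1)                 # horizontal average
        thresh = profile.mean() + k * profile.std()  # mean + 2 std by default
        n = len(profile)
        m = int(frac * n)
        top, bottom = 0, n
        # Scan the first `frac` of the profile from right to left; the
        # first value above the threshold marks the top boundary.
        for i in range(m - 1, -1, -1):
            if profile[i] > thresh:
                top = i
                break
        # Scan the last `frac` from left to right for the bottom boundary.
        for i in range(n - m, n):
            if profile[i] > thresh:
                bottom = i
                break
        return top, bottom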

Shifted R Raw
Edge Response Map
Bar Graph of Horizontal Average
Cropped R

Sample cropping results

Before
After
Before
After

Automatic white balancing & contrasting (Bells & Whistles #3 & #4)

AWB Approach: We assume that the brightest pixel is white, then re-scale the pixel intensities of each color channel accordingly. (I did not simulate a neutral illuminant by shifting the average color to gray, since that method made many images appear too "red".)
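A sketch of this white-patch rescaling (assuming a float RGB image in [0, 1]; taking the brightest pixel to be the one with the largest channel sum is my reading):

    def white_balance(img):
        # Treat the pixel with the largest total intensity as white and
        # scale each channel so that pixel maps to (1, 1, 1).
        flat = img.reshape(-1, 3)
        white = flat[flat.sum(axis=1).argmax()]
        return np.clip(img / np.maximum(white, 1e-6), 0, 1)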

Contrast Approach: Rescale all pixel intensities by YUV luminance value such that the near-brightest pixel (99th percentile of pixel intensity) maps to 1, and the near-darkest pixel (1st percentile of pixel intensity) maps to 0.
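A sketch of the luminance stretch (standard YUV conversion coefficients assumed; the percentile bounds follow the text):

    def auto_contrast(img, lo_pct=1, hi_pct=99):
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        # RGB -> YUV.
        y = 0.299 * r + 0.587 * g + 0.114 * b
        u = -0.14713 * r - 0.28886 * g + 0.436 * b
        v = 0.615 * r - 0.51499 * g - 0.10001 * b
        # Stretch luminance so the 1st percentile maps to 0 and the 99th
        # maps to 1; chrominance is left untouched.
        lo, hi = np.percentile(y, [lo_pct, hi_pct])
        y = np.clip((y - lo) / (hi - lo), 0, 1)
        # YUV -> RGB.
        return np.clip(np.dstack([y + 1.13983 * v,
                                  y - 0.39465 * u - 0.58060 * v,
                                  y + 2.03211 * u]), 0, 1)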

Original
AWB
Contrast
AWB + Contrast
Original
AWB
Contrast
AWB + Contrast

Run Time

Colorization (small images): ~1-2 seconds
Colorization (large images): ~60 seconds
Colorization + Cropping (small images): ~1-2 seconds
Colorization + Cropping (large images): ~80-100 seconds
Colorization + Cropping + AWB + Contrast (small images): ~1-3 seconds
Colorization + Cropping + AWB + Contrast (large images): ~80-100 seconds

Sample Images

G: [14,-9] R: [69,5]
G: [5,2] R: [12,3]
G: [50,23] R: [107,40]
G: [59,16] R: [124,13]
G: [55,9] R: [118,13]
G: [85,10] R: [178,13]
G: [-3,2] R: [3,2]
G: [52,27] R: [108,36]
G: [79,27] R: [176,35]
G: [7,0] R: [15,-1]
G: [53,14] R: [112,11]
G: [3,3] R: [6,3]
G: [42,8] R: [86,33]
G: [56,21] R: [116,28]
G: [64,11] R: [137,22]
(Extra) G: [55,7] R: [121,32]
(Extra) G: [52,-22] R: [107,-55]
(Extra) G: [13,-8] R: [134,-14]
(Extra) G: [-22,17] R: [-2,19]
(Extra) G: [21,17] R: [55,29]
(Extra) G: [26,20] R: [71,33]
(Extra) G: [74,9] R: [115,8]
(Extra) G: [59,18] R: [127,21]
(Extra) G: [57,12] R: [107,24]