Examples for Assignment 1

Note: You can hover over any image to see the original (high-resolution) version.

Brightness

 brightnessFilter( image, ratio )
Changes the brightness of an image by blending the original colors with black or white according to the ratio. (When ratio > 0, we blend with white to brighten the image; when ratio < 0, we blend with black to darken it.)
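Below is a minimal sketch of this blend. It assumes a hypothetical pixel API (image.width, image.height, getPixel(x, y) returning {r, g, b} with components in [0, 1], and setPixel(x, y, pixel)); adapt it to the actual assignment framework.

function brightnessFilter(image, ratio) {
  var target = ratio > 0 ? 1.0 : 0.0;   // blend partner: white or black
  var alpha = Math.abs(ratio);          // blend amount in [0, 1]
  for (var y = 0; y < image.height; y++) {
    for (var x = 0; x < image.width; x++) {
      var p = image.getPixel(x, y);
      p.r = (1 - alpha) * p.r + alpha * target;
      p.g = (1 - alpha) * p.g + alpha * target;
      p.b = (1 - alpha) * p.b + alpha * target;
      image.setPixel(x, y, p);
    }
  }
  return image;
}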
 
[Images: results for ratio = -1, -0.5, 0, 0.5, 1]

Contrast

 contrastFilter( image, ratio )
Changes the contrast of an image by interpolating between a constant gray image with the average luminance (ratio=-1) and the original image (ratio=0). Interpolation reduces contrast, extrapolation boosts contrast, and negative factors generate inverted images. See Graphica Obscura and Wiki_Contrast. (In Graphica Obscura's version the slider range is not uniform, so we remap the range [-1,1] to [0, infinity) using the following formula, which is mentioned in Wiki_Contrast):
 
value = (value - 0.5) * (tan ((contrast + 1) * PI/4) ) + 0.5;
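A sketch of this remap, applied per channel and clamped to [0, 1], using the same assumed pixel API as the brightness example (the slider ratio plays the role of "contrast" in the formula above):

function contrastFilter(image, ratio) {
  var slope = Math.tan((ratio + 1) * Math.PI / 4);  // ratio in [-1, 1] maps to a slope in [0, infinity)
  var remap = function (v) {
    return Math.min(1, Math.max(0, (v - 0.5) * slope + 0.5));
  };
  for (var y = 0; y < image.height; y++) {
    for (var x = 0; x < image.width; x++) {
      var p = image.getPixel(x, y);
      p.r = remap(p.r);
      p.g = remap(p.g);
      p.b = remap(p.b);
      image.setPixel(x, y, p);
    }
  }
  return image;
}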
[Images: results for ratio = -1, -0.5, 0, 0.5, 1]

Gamma Correction

 gammaFilter( image, logOfGamma )
Changes the image by applying gamma correction: V_out = Math.pow(V_in, gamma), where gamma = Math.exp(logOfGamma).
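A per-channel sketch with the same assumed pixel API:

function gammaFilter(image, logOfGamma) {
  var gamma = Math.exp(logOfGamma);     // gamma = e^logOfGamma
  for (var y = 0; y < image.height; y++) {
    for (var x = 0; x < image.width; x++) {
      var p = image.getPixel(x, y);
      p.r = Math.pow(p.r, gamma);
      p.g = Math.pow(p.g, gamma);
      p.b = Math.pow(p.b, gamma);
      image.setPixel(x, y, p);
    }
  }
  return image;
}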
 
[Images: results for logOfGamma = -1, -0.5, 0, 0.5, 1]

Histogram Equalization

 histeqFilter( image )
Increases the contrast of the image by histogram equalization on the L channel of HSL; that is, it remaps the pixel intensities so that the final histogram is flat. A low-contrast image usually clumps most pixels into a few tight clusters of intensities. Histogram equalization redistributes the pixel intensities uniformly over the full range [0, 1], while maintaining the relationship between light and dark areas of the image.
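A sketch of the CDF remapping, assuming hypothetical rgbToHsl/hslToRgb helpers (not part of the spec above) that convert between {r, g, b} and {h, s, l} with all components in [0, 1], plus the same assumed pixel API:

function histeqFilter(image) {
  var bins = 256, hist = new Array(bins).fill(0);
  var n = image.width * image.height;
  // 1) Histogram of lightness.
  for (var y = 0; y < image.height; y++) {
    for (var x = 0; x < image.width; x++) {
      var l = rgbToHsl(image.getPixel(x, y)).l;
      hist[Math.min(bins - 1, Math.floor(l * bins))]++;
    }
  }
  // 2) Cumulative distribution, normalized to [0, 1].
  var cdf = new Array(bins), sum = 0;
  for (var i = 0; i < bins; i++) { sum += hist[i]; cdf[i] = sum / n; }
  // 3) Remap each pixel's lightness through the CDF.
  for (var y = 0; y < image.height; y++) {
    for (var x = 0; x < image.width; x++) {
      var hsl = rgbToHsl(image.getPixel(x, y));
      hsl.l = cdf[Math.min(bins - 1, Math.floor(hsl.l * bins))];
      image.setPixel(x, y, hslToRgb(hsl));
    }
  }
  return image;
}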
 

[Images: before and after histogram equalization]

Saturation

 saturationFilter( image, ratio ) 
Changes the saturation of an image by interpolating between a grayscale version of the image (ratio=-1) and the original image (ratio=0). Interpolation decreases saturation, extrapolation increases it, and negative factors preserve luminance but invert the hue of the input image. See Graphica Obscura; its parameter alpha corresponds to 1+ratio on our slider.
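A sketch of the interpolation/extrapolation against the per-pixel luminance, with the same assumed pixel API (the Rec. 709 luminance weights are an assumption; use whatever luminance your codebase defines):

function saturationFilter(image, ratio) {
  var alpha = 1 + ratio;   // Graphica Obscura's alpha: 0 = grayscale, 1 = original, >1 = extrapolated
  for (var y = 0; y < image.height; y++) {
    for (var x = 0; x < image.width; x++) {
      var p = image.getPixel(x, y);
      var lum = 0.2126 * p.r + 0.7152 * p.g + 0.0722 * p.b;  // assumed luminance weights
      p.r = (1 - alpha) * lum + alpha * p.r;
      p.g = (1 - alpha) * lum + alpha * p.g;
      p.b = (1 - alpha) * lum + alpha * p.b;
      image.setPixel(x, y, p);
    }
  }
  return image;
}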
 
[Images: results for ratio = -1.0, -0.5, 0, 0.5, 1]

White Balance

whiteBalanceFilter( image, hex )
Adjusts the white balance of the scene to compensate for lighting that is too warm, too cool, or tinted, producing a neutral image. Use the von Kries method: convert the image from RGB to the LMS color space (there are several slightly different versions of this space; use any reasonable one, e.g. RLAB), divide by the LMS coordinates of the white point color (the estimated tint of the illumination), and convert back to RGB. (Image source)
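A sketch of the von Kries step, assuming hypothetical rgbToLms/lmsToRgb conversions for whichever LMS variant you pick, and a hexToRgb helper that parses '#rrggbb' into {r, g, b} in [0, 1]:

function whiteBalanceFilter(image, hex) {
  var whiteLms = rgbToLms(hexToRgb(hex));   // estimated tint of the illumination
  for (var y = 0; y < image.height; y++) {
    for (var x = 0; x < image.width; x++) {
      var lms = rgbToLms(image.getPixel(x, y));
      lms.l /= whiteLms.l;                  // divide each cone response by the white point's
      lms.m /= whiteLms.m;
      lms.s /= whiteLms.s;
      image.setPixel(x, y, lmsToRgb(lms));
    }
  }
  return image;
}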
 

[Images: before correction (too warm) and after correction (neutral); results for given white hex #cee2f5 and #f5cece]

Vignette

 vignetteFilter( image, value ) 
Darkens the corners of the image, as observed when using lenses with very wide apertures (ref). In this function, value sets the innerRadius and outerRadius. It can be a single number (which sets the innerRadius and leaves outerRadius at the default of 1) or a pair such as [0.1,0.7] or (0.1,0.7), which sets both the innerRadius and the outerRadius. The code for extracting innerRadius and outerRadius from value is already given. The image should be perfectly clear up to innerRadius, perfectly dark (black) at outerRadius and beyond, and darken smoothly in the circular ring between them. Both radii are specified as multiples of half the length of the image diagonal (so 1.0 is the distance from the image center to a corner).

Note: the vignetting ring should be a perfect circle, not an ellipse. Camera lenses typically have circular apertures, even if the sensor/film is rectangular.
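A sketch that takes innerR and outerR directly (the decoding from value is already provided) and uses a simple linear falloff; a smoothstep works too. Same assumed pixel API, and it assumes outerR > innerR:

function vignette(image, innerR, outerR) {
  var cx = image.width / 2, cy = image.height / 2;
  var halfDiag = Math.sqrt(cx * cx + cy * cy);   // 1.0 = center-to-corner distance
  for (var y = 0; y < image.height; y++) {
    for (var x = 0; x < image.width; x++) {
      var d = Math.sqrt((x - cx) * (x - cx) + (y - cy) * (y - cy)) / halfDiag;
      // Scale is 1 inside innerR, 0 at/after outerR, and ramps linearly in between.
      var t = (d - innerR) / Math.max(1e-6, outerR - innerR);
      var scale = 1 - Math.min(1, Math.max(0, t));
      var p = image.getPixel(x, y);
      p.r *= scale; p.g *= scale; p.b *= scale;
      image.setPixel(x, y, p);
    }
  }
  return image;
}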
 

[Images: value=0.5 (innerR 0.25, outerR 1); value=1 (innerR 0.5, outerR 1); value=(0.5,0.5) (innerR 0.25, outerR 0.75); value=(1.0,0.5) (innerR 0, outerR 0.75)]

Histogram Matching

 Filters.histMatchFilter = function( image, refImg, value ) 

Adjusts the color/contrast of the input image by matching its histogram to that of refImg. value controls whether the match is done on the luminance channel or on the RGB channels. The results in the first row below use histogram matching on the RGB channels, while the results in the second row match only luminance.
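A sketch of matching a single channel via cumulative histograms; matchChannel is a hypothetical helper (call it once per RGB channel, or apply the same idea to luminance), using the same assumed pixel API:

function matchChannel(image, refImg, channel) {
  var bins = 256;
  function cdfOf(img) {                           // normalized cumulative histogram of one channel
    var hist = new Array(bins).fill(0);
    for (var y = 0; y < img.height; y++)
      for (var x = 0; x < img.width; x++)
        hist[Math.min(bins - 1, Math.floor(img.getPixel(x, y)[channel] * bins))]++;
    var cdf = new Array(bins), sum = 0, n = img.width * img.height;
    for (var i = 0; i < bins; i++) { sum += hist[i]; cdf[i] = sum / n; }
    return cdf;
  }
  var srcCdf = cdfOf(image), refCdf = cdfOf(refImg);
  // Lookup table: source level -> smallest reference level whose CDF reaches the source CDF.
  var lut = new Array(bins), j = 0;
  for (var i = 0; i < bins; i++) {
    while (j < bins - 1 && refCdf[j] < srcCdf[i]) j++;
    lut[i] = j / (bins - 1);
  }
  for (var y = 0; y < image.height; y++) {
    for (var x = 0; x < image.width; x++) {
      var p = image.getPixel(x, y);
      p[channel] = lut[Math.min(bins - 1, Math.floor(p[channel] * bins))];
      image.setPixel(x, y, p);
    }
  }
  return image;
}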

[Images: row 1 (RGB matching) and row 2 (luminance matching), each with reference images "town" and "flower"]

Quantization

 quantizeFilter( image, numBits ) 
Converts an image to numBits bits per channel using uniform quantization.

The number of output levels per channel is 2^numBits, evenly distributed so that the lowest level is 0.0 and the highest is 1.0. Every input value is mapped to the closest available output level.
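A sketch with the same assumed pixel API:

function quantizeFilter(image, numBits) {
  var levels = 1 << numBits;   // 2^numBits output levels (adjust if your slider passes a level count directly)
  var quantize = function (v) {
    return Math.round(v * (levels - 1)) / (levels - 1);   // snap to the nearest level in [0, 1]
  };
  for (var y = 0; y < image.height; y++) {
    for (var x = 0; x < image.width; x++) {
      var p = image.getPixel(x, y);
      p.r = quantize(p.r);
      p.g = quantize(p.g);
      p.b = quantize(p.b);
      image.setPixel(x, y, p);
    }
  }
  return image;
}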
 

[Images: results for parameter settings 2, 4, 8, 16, 32]

Random Dither

 randomFilter( image, numBits ) 
Converts an image to numBits bits per channel using random dithering. It is similar to uniform quantization, but random noise spanning one quantization step is added to each component before rounding, so that the arithmetic mean of many output pixels with the same input level equals that input level.
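The only change from uniform quantization is the per-value rounding step, sketched below; apply it to each channel in the same loop as above (levels is again the number of output levels):

function randomQuantize(value, levels) {
  var noise = Math.random() - 0.5;                   // uniform noise, one quantization step wide
  var v = Math.round(value * (levels - 1) + noise);  // jittered rounding
  return Math.min(levels - 1, Math.max(0, v)) / (levels - 1);
}

With this jitter, a value rounds up with probability equal to its fractional part, so the expected output equals the input level.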
 
[Images: results for parameter settings 2, 4, 8, 16, 32]

Ordered Dither

 orderedFilter( image, numBits ) 
Converts an image to numBits bits per channel using ordered dithering. The following examples used the pattern
Bayer4 =
  15   7  13   5
   3  11   1   9
  12   4  14   6
   0   8   2  10

The values can be used as thresholds for rounding up or down as described in the lecture slides.
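A sketch of one common thresholding convention (your lecture slides may use (D+1)/(n^2+1) instead); the per-value step below replaces the rounding in uniform quantization:

var Bayer4 = [
  [15,  7, 13,  5],
  [ 3, 11,  1,  9],
  [12,  4, 14,  6],
  [ 0,  8,  2, 10]
];

function orderedQuantize(value, levels, x, y) {
  var scaled = value * (levels - 1);
  var base = Math.floor(scaled);
  var err = scaled - base;                            // fractional part in [0, 1)
  var threshold = (Bayer4[y % 4][x % 4] + 0.5) / 16;  // position-dependent threshold in (0, 1)
  var q = err > threshold ? base + 1 : base;          // round up or down against the threshold
  return Math.min(levels - 1, q) / (levels - 1);
}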
[Images: results for parameter settings 2, 4, 8, 16, 32]

Floyd-Steinberg Dither

 floydFilter( image, numBits ) 
Converts an image to numBits bits per channel using Floyd-Steinberg dithering with error diffusion. Each pixel (x, y) is quantized and the quantization error is computed. The error is then diffused to the neighboring pixels (x+1, y), (x-1, y+1), (x, y+1), and (x+1, y+1), with weights 7/16, 3/16, 5/16, and 1/16, respectively.
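A sketch of the scan-and-diffuse loop, writing the error back into the image as it goes (same assumed pixel API; levels is the number of output levels per channel):

function floydFilter(image, levels) {
  var offsets = [[1, 0, 7 / 16], [-1, 1, 3 / 16], [0, 1, 5 / 16], [1, 1, 1 / 16]];
  for (var y = 0; y < image.height; y++) {
    for (var x = 0; x < image.width; x++) {
      var p = image.getPixel(x, y);
      ['r', 'g', 'b'].forEach(function (c) {
        var q = Math.round(Math.min(1, Math.max(0, p[c])) * (levels - 1)) / (levels - 1);
        var err = p[c] - q;                      // quantization error for this channel
        p[c] = q;
        offsets.forEach(function (o) {           // diffuse the error to the unvisited neighbors
          var nx = x + o[0], ny = y + o[1];
          if (nx < 0 || nx >= image.width || ny >= image.height) return;
          var np = image.getPixel(nx, ny);
          np[c] += err * o[2];
          image.setPixel(nx, ny, np);
        });
      });
      image.setPixel(x, y, p);
    }
  }
  return image;
}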
 
[Images: results for parameter settings 2, 4, 8, 16, 32]

Gaussian Filter

 gaussianFilter( image, sigma )
Blurs an image by convolving it with a Gaussian filter. In the examples below, the Gaussian function used was

  G(x) = exp(-x^2/(2*sigma^2))
 
and the number below each image indicates the sigma of the filter. Set the filter window size to Math.round(2*sigma)*2 + 1.
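A sketch using a separable (horizontal, then vertical) convolution with the window size above; image.copy() is an assumed helper that clones the image, and borders are handled by clamping coordinates:

function gaussianFilter(image, sigma) {
  var winR = Math.round(2 * sigma);               // window size = 2*winR + 1
  var kernel = [], sum = 0;
  for (var i = -winR; i <= winR; i++) {
    var w = Math.exp(-i * i / (2 * sigma * sigma));
    kernel.push(w);
    sum += w;
  }
  kernel = kernel.map(function (w) { return w / sum; });   // normalize so weights sum to 1

  function pass(src, dx, dy) {                    // one 1D convolution pass along (dx, dy)
    var dst = src.copy();
    for (var y = 0; y < src.height; y++) {
      for (var x = 0; x < src.width; x++) {
        var r = 0, g = 0, b = 0;
        for (var k = -winR; k <= winR; k++) {
          var sx = Math.min(src.width - 1, Math.max(0, x + k * dx));
          var sy = Math.min(src.height - 1, Math.max(0, y + k * dy));
          var p = src.getPixel(sx, sy), w = kernel[k + winR];
          r += w * p.r; g += w * p.g; b += w * p.b;
        }
        dst.setPixel(x, y, { r: r, g: g, b: b });
      }
    }
    return dst;
  }
  return pass(pass(image, 1, 0), 0, 1);           // horizontal, then vertical
}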
 
[Images: results for sigma = 1, 2, 3, 4, 5]

Median Filter

 medianFilter( image, winR )
Blurs an image by replacing each pixel with the median of its (2*winR+1) x (2*winR+1) neighborhood. The results in the first row apply the median filter to each RGB channel separately; the results in the second row take the median pixel when the neighbors are sorted by luminance.
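A sketch of the first-row (per-channel) variant, clamping window coordinates at the image border; image.copy() is the same assumed clone helper:

function medianFilter(image, winR) {
  var out = image.copy();
  for (var y = 0; y < image.height; y++) {
    for (var x = 0; x < image.width; x++) {
      var rs = [], gs = [], bs = [];
      for (var dy = -winR; dy <= winR; dy++) {
        for (var dx = -winR; dx <= winR; dx++) {
          var sx = Math.min(image.width - 1, Math.max(0, x + dx));
          var sy = Math.min(image.height - 1, Math.max(0, y + dy));
          var p = image.getPixel(sx, sy);
          rs.push(p.r); gs.push(p.g); bs.push(p.b);
        }
      }
      var byValue = function (a, b) { return a - b; };   // numeric sort
      rs.sort(byValue); gs.sort(byValue); bs.sort(byValue);
      var mid = Math.floor(rs.length / 2);
      out.setPixel(x, y, { r: rs[mid], g: gs[mid], b: bs[mid] });
    }
  }
  return out;
}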
[Images: row 1 (per-channel RGB median) and row 2 (luminance-sorted median), for winR = 1, 2, 3, 4, 5]

Bilateral Filter

 bilateralFilter( image, value ), where value is sigmaR (with sigmaS=1) or [sigmaR, sigmaS]
Blurs an image by replacing each pixel with a weighted average of nearby pixels. The weights depend not only on the Euclidean distance between pixels but also on the pixel difference, which can be either a luminance difference or an L2 distance in color space. For the pixel I(i,j) located at (i,j), the weight of a nearby pixel I(k,l) is

 w(i,j,k,l) = exp( -((i-k)^2 + (j-l)^2) / (2*sigmaS^2) - ||I(i,j) - I(k,l)||^2 / (2*sigmaR^2) )

where the first term is the spatial Gaussian and the second is the range Gaussian on the pixel difference. In the examples below, to keep the range and spatial terms balanced, we divide sigmaR by sqrt(2)*winR; without this factor the result is dominated by the spatial term and looks similar to a Gaussian blur. Set the filter window size to 2*Math.round(max(sigmaR, sigmaS)*3) + 1.
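A sketch with the same assumed pixel API, treating sigmaS as the spatial sigma and the rescaled sigmaR as the range sigma on the RGB difference (a per-channel L2 distance; a luminance difference works too):

function bilateralFilter(image, sigmaR, sigmaS) {
  var winR = Math.round(Math.max(sigmaR, sigmaS) * 3);
  var sR = sigmaR / (Math.sqrt(2) * winR);        // rebalanced range sigma (see above)
  var out = image.copy();                          // assumed clone helper
  for (var y = 0; y < image.height; y++) {
    for (var x = 0; x < image.width; x++) {
      var c = image.getPixel(x, y);
      var r = 0, g = 0, b = 0, wSum = 0;
      for (var dy = -winR; dy <= winR; dy++) {
        for (var dx = -winR; dx <= winR; dx++) {
          var sx = Math.min(image.width - 1, Math.max(0, x + dx));
          var sy = Math.min(image.height - 1, Math.max(0, y + dy));
          var p = image.getPixel(sx, sy);
          var dist2 = dx * dx + dy * dy;           // spatial distance squared
          var diff2 = (p.r - c.r) * (p.r - c.r) +  // color difference squared
                      (p.g - c.g) * (p.g - c.g) +
                      (p.b - c.b) * (p.b - c.b);
          var w = Math.exp(-dist2 / (2 * sigmaS * sigmaS) - diff2 / (2 * sR * sR));
          r += w * p.r; g += w * p.g; b += w * p.b; wSum += w;
        }
      }
      out.setPixel(x, y, { r: r / wSum, g: g / wSum, b: b / wSum });
    }
  }
  return out;
}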
[Images: results for sigmaR=1; sigmaR=2; sigmaR=3, sigmaS=0.5; sigmaR=4, sigmaS=2; sigmaR=5, sigmaS=3]

Sharpen Filter

 sharpenFilter( image )
Sharpens edges in an image by convolving it with the edge kernel below and adding the result to the original image:

  -1  -1  -1
  -1   8  -1
  -1  -1  -1
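A sketch that convolves with this kernel and adds the response back, clamping to [0, 1] (same assumed pixel API and clone helper):

function sharpenFilter(image) {
  var kernel = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]];
  var out = image.copy();
  var clamp = function (v) { return Math.min(1, Math.max(0, v)); };
  for (var y = 0; y < image.height; y++) {
    for (var x = 0; x < image.width; x++) {
      var r = 0, g = 0, b = 0;
      for (var dy = -1; dy <= 1; dy++) {
        for (var dx = -1; dx <= 1; dx++) {
          var sx = Math.min(image.width - 1, Math.max(0, x + dx));
          var sy = Math.min(image.height - 1, Math.max(0, y + dy));
          var p = image.getPixel(sx, sy), w = kernel[dy + 1][dx + 1];
          r += w * p.r; g += w * p.g; b += w * p.b;
        }
      }
      var c = image.getPixel(x, y);   // add the edge response to the original pixel
      out.setPixel(x, y, { r: clamp(c.r + r), g: clamp(c.g + g), b: clamp(c.b + b) });
    }
  }
  return out;
}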

 

Scale

 scaleFilter( image, ratio )
Scales an image in width and height by ratio. The result depends on the current sampling method (point, bilinear, or Gaussian). In the example below, gamma=1, the window radius of the Gaussian filter is 3, ratio = 0.7.
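A sketch of the point-sampling case via inverse mapping; createImg(w, h) is an assumed helper that allocates a blank image of the given size (bilinear or Gaussian sampling would replace the rounding step with the corresponding resampler):

function scaleFilter(image, ratio) {
  var newW = Math.round(image.width * ratio), newH = Math.round(image.height * ratio);
  var out = image.createImg(newW, newH);
  for (var y = 0; y < newH; y++) {
    for (var x = 0; x < newW; x++) {
      // Map the output pixel back into the source and take the nearest source pixel.
      var sx = Math.min(image.width - 1, Math.round(x / ratio));
      var sy = Math.min(image.height - 1, Math.round(y / ratio));
      out.setPixel(x, y, image.getPixel(sx, sy));
    }
  }
  return out;
}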
 
[Images: point, bilinear, and Gaussian sampling]

Rotate

 rotateFilter( image, ratio )
Rotates an image by the given angle, in radians (a positive angle implies clockwise rotation). The result depends on the current sampling method (point, bilinear, or Gaussian). In the example below, gamma=1, the window radius of the Gaussian filter is 3, and ratio = 0.2.
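A sketch of the inverse mapping with point sampling; the parameter (called ratio in the signature above) is the angle in radians, createImg is the same assumed blank-image helper, the canvas keeps its original size, and pixels that map outside the source stay black:

function rotateFilter(image, angle) {
  var out = image.createImg(image.width, image.height);
  var cx = image.width / 2, cy = image.height / 2;
  var cos = Math.cos(-angle), sin = Math.sin(-angle);   // rotate destination coords back by -angle
  for (var y = 0; y < image.height; y++) {
    for (var x = 0; x < image.width; x++) {
      var dx = x - cx, dy = y - cy;
      var sx = Math.round(cx + dx * cos - dy * sin);
      var sy = Math.round(cy + dx * sin + dy * cos);
      if (sx >= 0 && sx < image.width && sy >= 0 && sy < image.height) {
        out.setPixel(x, y, image.getPixel(sx, sy));
      }
    }
  }
  return out;
}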
 
[Images: point, bilinear, and Gaussian sampling]

Swirl

 swirlFilter( image, ratio )
Warps an image using a creative filter of your choice. In the following example, each pixel is mapped to its corresponding scaled polar coordinates; here ratio = 0.4.
 
[Images: point, bilinear, and Gaussian sampling]

Composite

 compositeFilterFile( backgroundImg, foregroundImg, alphaImg ) or
compositeFilter( backgroundImg, foregroundImg, alpha )
Composites the foreground image over the background image, using either the alpha file of the foreground image as a mask or an alpha parameter to blend the two images. You can use an image editor of your choice to generate the input images you need.
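A sketch of the per-pixel blend for the file-mask variant, with the same assumed pixel API; the scalar-alpha variant simply uses the same alpha for every pixel:

function compositeFilterFile(backgroundImg, foregroundImg, alphaImg) {
  for (var y = 0; y < backgroundImg.height; y++) {
    for (var x = 0; x < backgroundImg.width; x++) {
      var bg = backgroundImg.getPixel(x, y);
      var fg = foregroundImg.getPixel(x, y);
      var a = alphaImg.getPixel(x, y).r;   // grayscale mask: any channel works
      backgroundImg.setPixel(x, y, {
        r: a * fg.r + (1 - a) * bg.r,
        g: a * fg.g + (1 - a) * bg.g,
        b: a * fg.b + (1 - a) * bg.b
      });
    }
  }
  return backgroundImg;
}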
 

[Images: two examples, each showing backgroundImg, foregroundImg, alphaImg, and the composited result]

Morph

 morphFilter( initialImg, finalImg, lines, alpha ) 
Morphs two images using [Beier92]. initialImg and finalImg are the before and after images, respectively. lines are the corresponding line segments to be aligned. alpha is the morph time: it can be a number between 0 and 1 indicating which point in the morph sequence should be returned, or (start:step:end) to define a morph sequence.
 

[Images: morph sequence for alpha = 0, 0.11, 0.22, 0.33, 0.44, 0.56, 0.67, 0.78, 0.89, 1]
[Animation: the morph sequence above]
[Animation: the morph for alpha=(0:0.1:1) using the example images and morph lines provided with the assignment zip, together with a still frame at alpha=0.5]

Palette

 paletteFilter( image, colorNum )
Extracts colorNum colors as a palette representing the colors in the image. Here we use the k-means method, with grid acceleration, to extract the color palette.
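A sketch of plain k-means in RGB without the grid acceleration; it returns the palette colors (same assumed pixel API):

function paletteFilter(image, colorNum) {
  var pixels = [];
  for (var y = 0; y < image.height; y++)
    for (var x = 0; x < image.width; x++)
      pixels.push(image.getPixel(x, y));

  // Initialize centers with randomly chosen pixels.
  var centers = [];
  for (var i = 0; i < colorNum; i++)
    centers.push(pixels[Math.floor(Math.random() * pixels.length)]);

  var dist2 = function (a, b) {
    return (a.r - b.r) * (a.r - b.r) + (a.g - b.g) * (a.g - b.g) + (a.b - b.b) * (a.b - b.b);
  };

  for (var iter = 0; iter < 10; iter++) {
    // Assign every pixel to its nearest center and accumulate per-cluster sums.
    var sums = centers.map(function () { return { r: 0, g: 0, b: 0, n: 0 }; });
    pixels.forEach(function (p) {
      var best = 0;
      for (var c = 1; c < colorNum; c++)
        if (dist2(p, centers[c]) < dist2(p, centers[best])) best = c;
      var s = sums[best];
      s.r += p.r; s.g += p.g; s.b += p.b; s.n++;
    });
    // Move each center to the mean of its cluster (keep it if the cluster is empty).
    centers = sums.map(function (s, c) {
      return s.n > 0 ? { r: s.r / s.n, g: s.g / s.n, b: s.b / s.n } : centers[c];
    });
  }
  return centers;   // the extracted palette colors
}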
[Images: extracted palettes for colorNum = 2, 3, 5]