Princeton University | Computer Science 526
This image contains 18 spheres. In the back are 9 fully refractive,
yellow spheres. The center, front sphere is a glowing white sphere. Around
it are refractive, cyan spheres in the corners and refractive, magenta
spheres on the sides. Notice the yellow caustics in the background, and
the cyan and magenta caustics and highlights. The front spheres appear red
and green because the light is bouncing off the back wall and through
the yellow spheres. Rendering parameters are the same as for the snowman.
Render time was approximately 80 minutes.
The Cornell box with a specular sphere. 1024 paths/pixel, jittered.
No photon mapping.
God only knows.
I used "Bi-Directional Path Tracing" by Lafortune and Willems as a reference
for this implementation. I succeeded in creating paths from the eye and
from the lights, and also succeeded in creating the necessary shadow rays
to connect points along these paths together. However, I failed (partly
from an incomplete understanding and partly from a lack of time) to correctly
weight the contributions of all the paths. The resulting images look like
these, which come from various attempts at weighting the different paths.
All of the figures below were rendered with 1 ray per pixel for speed while
debugging.
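The weighting step described above is exactly what multiple importance sampling formalizes. As an illustrative sketch (not from this implementation), one standard choice is the balance heuristic, which weights each path by the relative probability with which each sampling strategy could have generated it; the pdf values below are made up for illustration:

```python
# Balance-heuristic weights for combining several sampling strategies.
# The pdf values are hypothetical, not taken from any particular renderer.

def balance_heuristic(pdfs, i):
    """Weight for strategy i, given every strategy's pdf for the same path."""
    total = sum(pdfs)
    return pdfs[i] / total if total > 0 else 0.0

# Suppose three strategies could sample the same path with these pdfs:
pdfs = [0.2, 0.5, 0.3]
weights = [balance_heuristic(pdfs, i) for i in range(len(pdfs))]
```

Because the weights for any given path sum to one, the combined estimator remains unbiased no matter how the contributions are split among strategies.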
To help with debugging and to visualize the Phong distribution I modified
the rdraw program to display Phong-distributed rays and to allow for interactive
adjustment of the specular exponent (n). The following figures come from
that version of rdraw, showing the results for increasing values of n.
For each incoming ray (shown with a yellow line), this version of rdraw
plots the perfect specular reflection (black line), 100 diffuse reflections
(red lines), and 100 Phong-specular reflections (blue lines).
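For reference, Phong-distributed rays like the blue ones can be generated by sampling the cos^n lobe around the mirror direction. A minimal sketch (not the actual rdraw code; `reflect` is assumed to be a unit vector):

```python
import math, random

def sample_phong_lobe(reflect, n, rng=random):
    """Sample a direction from the cos^n lobe centered on `reflect`."""
    u1, u2 = rng.random(), rng.random()
    cos_t = u1 ** (1.0 / (n + 1.0))          # pdf proportional to cos^n(theta)
    sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    phi = 2.0 * math.pi * u2
    # Build an orthonormal basis (tx, ty, reflect) around the mirror direction.
    rx, ry, rz = reflect
    a = (1.0, 0.0, 0.0) if abs(rx) < 0.9 else (0.0, 1.0, 0.0)
    tx = (a[1]*rz - a[2]*ry, a[2]*rx - a[0]*rz, a[0]*ry - a[1]*rx)
    tl = math.sqrt(sum(c*c for c in tx))
    tx = tuple(c / tl for c in tx)
    ty = (ry*tx[2] - rz*tx[1], rz*tx[0] - rx*tx[2], rx*tx[1] - ry*tx[0])
    x, y = sin_t * math.cos(phi), sin_t * math.sin(phi)
    return tuple(x*tx[i] + y*ty[i] + cos_t*(rx, ry, rz)[i] for i in range(3))
```

As n increases, the sampled directions cluster ever more tightly around the mirror direction, which is exactly the narrowing of the blue lobe the figures show.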


Images with the floor set to a high specular exponent. The first has
the BRDF specular component set to (0, 0, 0), while the second has the BRDF
specular component set to (1, 1, 1). The second image creates an interesting
effect with the shadows that makes the edge of the right wall look almost
warped. The first image has 50 samples per pixel and the second has 20.
The second is also noisier due to more reflection and longer paths.
This image uses different specularities and reflections on the spheres,
the red-colored wall, and the ceiling. This has a high number of samples
(195 total samples per pixel, done using 15 passes with a 3x3 jittered
grid per pixel).
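An NxN jittered grid of the kind used above takes one uniform random sample in each cell of the pixel; a sketch (illustrative, not the author's code):

```python
import random

def jittered_samples(n, rng=random):
    """One random sample in each cell of an n x n grid over the unit pixel."""
    samples = []
    for j in range(n):
        for i in range(n):
            samples.append(((i + rng.random()) / n, (j + rng.random()) / n))
    return samples
```

Stratifying this way keeps samples from clumping, which lowers variance compared with the same number of fully random samples per pixel.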
A collage of a few images that didn't go so well...
Nathaniel's movie (ART CONTEST CO-WINNER)

The gray hemisphere is centered around the "x" point that lies on the
blue plane.
The yellow rays point to the vertices of the gray triangle above.
The blue rays are generated by Arvo's sampling algorithm.
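For reference, Arvo's algorithm ("Stratified Sampling of Spherical Triangles", 1995) maps two uniform random numbers to a uniformly distributed point on a spherical triangle. A sketch of that mapping (an illustrative reimplementation, not the code behind the figure, assuming a non-degenerate triangle with unit-vector vertices):

```python
import math

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ortho_part(p, axis):
    """Component of p orthogonal to the unit vector `axis`, normalized."""
    d = dot(p, axis)
    w = (p[0] - d*axis[0], p[1] - d*axis[1], p[2] - d*axis[2])
    l = math.sqrt(dot(w, w))
    return (w[0]/l, w[1]/l, w[2]/l)

def internal_angle(p, q, r):
    """Spherical-triangle angle at vertex p between the edges toward q and r."""
    return math.acos(max(-1.0, min(1.0, dot(ortho_part(q, p), ortho_part(r, p)))))

def sample_spherical_triangle(A, B, C, u1, u2):
    """Uniform sample of the spherical triangle ABC, after Arvo (1995)."""
    alpha = internal_angle(A, B, C)
    beta  = internal_angle(B, C, A)
    gamma = internal_angle(C, A, B)
    area = alpha + beta + gamma - math.pi     # spherical excess
    # Choose the area of the sub-triangle A-B-Chat, then solve for Chat.
    a_hat = u1 * area
    s, t = math.sin(a_hat - alpha), math.cos(a_hat - alpha)
    u = t - math.cos(alpha)
    v = s + math.sin(alpha) * dot(A, B)       # dot(A, B) = cos of edge AB
    q = ((v*t - u*s) * math.cos(alpha) - v) / ((v*s + u*t) * math.sin(alpha))
    q = max(-1.0, min(1.0, q))
    w = ortho_part(C, A)
    c_hat = tuple(q*A[i] + math.sqrt(1.0 - q*q)*w[i] for i in range(3))
    # Choose a point along the arc from B toward Chat.
    z = max(-1.0, min(1.0, 1.0 - u2 * (1.0 - dot(c_hat, B))))
    w2 = ortho_part(c_hat, B)
    return tuple(z*B[i] + math.sqrt(1.0 - z*z)*w2[i] for i in range(3))
```

The first random number fixes the area of a sub-triangle (which is what makes the sampling uniform in solid angle), and the second places the point along the remaining arc.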

Left: Variation on a Cornell box (I prefer green and purple).
Right: Another Cornell box variation. It's very easy to see the diffuse-diffuse
reflections in this example.
Each of these images also contains a perfectly specular sphere in the
corner.
343 spheres in a grid; 1 large triangular light source behind the camera
640x480 with 256 samples/pixel; lens diameter=1/10 for depth of field
Using the grid to accelerate intersections this took about 1 hour to
render on a 1.1GHz Athlon
Without using the grid this took about 1 hour to render only 16 samples
(rather than 256)
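The speedup comes from the traversal visiting only the cells the ray actually crosses (the standard Amanatides-Woo stepping), so each ray tests the handful of spheres stored in those cells instead of all 343. A minimal sketch, assuming the ray origin lies inside the grid:

```python
import math

def grid_cells(origin, direction, gmin, cell_size, dims):
    """Yield the (ix, iy, iz) cells a ray visits, in order.
    Assumes the ray origin is already inside the grid (no entry clipping)."""
    cell = [int((origin[i] - gmin[i]) / cell_size) for i in range(3)]
    cell = [min(max(c, 0), dims[i] - 1) for i, c in enumerate(cell)]
    step, t_max, t_delta = [], [], []
    for i in range(3):
        d = direction[i]
        if d > 0:
            step.append(1)
            t_max.append((gmin[i] + (cell[i] + 1) * cell_size - origin[i]) / d)
            t_delta.append(cell_size / d)
        elif d < 0:
            step.append(-1)
            t_max.append((gmin[i] + cell[i] * cell_size - origin[i]) / d)
            t_delta.append(cell_size / -d)
        else:
            step.append(0)
            t_max.append(math.inf)
            t_delta.append(math.inf)
    while True:
        yield tuple(cell)
        axis = t_max.index(min(t_max))   # cross the nearest cell boundary
        cell[axis] += step[axis]
        if not 0 <= cell[axis] < dims[axis]:
            return
        t_max[axis] += t_delta[axis]
```

In a renderer, one would intersect the ray against the spheres stored in each yielded cell and stop at the first cell that produces a hit, which is what turns the 1-hour render into one that scales with cells crossed rather than scene size.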


A series of 6 texture mapped triangles. All images were rendered at
640x480 with 256 samples/pixel.
Upper Left: No depth of field effects.
Upper Right: Closest triangle in focus. F=1, a=10, so lens diameter
= 1/10
Lower Left: 2nd triangle in focus. F=1.5, a=22.5, lens diameter = 1/15
Lower Right: 6th triangle in focus. F=3.5, a=122.5, lens diameter =
1/35
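These parameters fit the usual thin-lens model: each primary ray's origin is jittered across the lens disk and redirected through the point where the original pinhole ray pierces the focal plane, so lens diameter = F/a controls the blur. A sketch (hypothetical names; camera at the origin looking down +z):

```python
import math, random

def thin_lens_ray(pixel_dir, focal_dist, lens_diameter, rng=random):
    """Jitter a primary ray over the lens disk for depth of field.
    `pixel_dir` is the original pinhole-camera ray direction (z > 0);
    `focal_dist` is the distance to the plane in focus."""
    # Point where the pinhole ray pierces the focal plane z = focal_dist.
    t = focal_dist / pixel_dir[2]
    focus = tuple(t * c for c in pixel_dir)
    # Uniform sample on the lens disk (rejection sampling for brevity).
    r = lens_diameter / 2.0
    while True:
        x, y = rng.uniform(-r, r), rng.uniform(-r, r)
        if x * x + y * y <= r * r:
            break
    origin = (x, y, 0.0)
    d = tuple(focus[i] - origin[i] for i in range(3))
    l = math.sqrt(sum(c * c for c in d))
    return origin, tuple(c / l for c in d)
```

Every lens sample's ray passes through the same focal-plane point, so geometry at that depth stays sharp, while points off the plane get smeared over a disk that grows with the lens diameter.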
The Cornell Box: All the surfaces are diffuse. The above
image clearly shows indirect illumination by way of the color bleeding
that can be noticed on the back wall and (more prominently) in the shadows
of the two boxes. The white line along the front of the small box
is the result of performing an incorrect intersection test with a ray that
originates *very* close to the box's edge. This image was rendered
from a total of 50 samples per pixel and the result was anisotropically
blurred.
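The color bleeding above comes from recursively sampling a diffuse bounce direction over the hemisphere at each hit. A common choice (not necessarily the one used here) is cosine-weighted sampling via Malley's method, shown for a surface normal of (0, 0, 1):

```python
import math, random

def cosine_sample_hemisphere(rng=random):
    """Cosine-weighted direction over the hemisphere about n = (0, 0, 1):
    sample the unit disk uniformly, then project up (Malley's method)."""
    u1, u2 = rng.random(), rng.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    x, y = r * math.cos(phi), r * math.sin(phi)
    return (x, y, math.sqrt(max(0.0, 1.0 - u1)))
```

With pdf = cos(theta)/pi, the cosine term of the rendering equation cancels against the pdf, so each diffuse bounce simply multiplies the path throughput by the surface albedo.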
Blue Ball on Shiny Marble Table: In this scene, the ball is completely
diffuse and the plane it rests on is strongly specular (with a specular
coefficient of n=2000). The plane is also texture mapped with a marble
image and the scene is lit with three spherical light sources that explain
the pattern of shadows around the ball. You can also see evidence
of diffuse interreflections that have illuminated the penumbra of these
shadows along with the bottom-side of the sphere which is not directly
lit. The specular reflection of the ball's image off the table is
also clearly visible giving evidence that the brdf of its surface is being
sampled importantly. This image was rendered from a total of 200
samples per pixel.
Shiny Cornell Box: In this image the walls of the Cornell Box
have both a specular and diffuse component. The specular coefficient
is quite high, which accounts for the sharp reflections of the room.
This image illustrates the recursive depth of the algorithm in that it
approximates (albeit roughly) the infinite cascade of interreflections
between the walls.
Cornell Box With Shiny Balls: This image is the same Cornell
Box we've seen before, but with highly specular spheres. The scene
is lit with a single spherical light source. Please accept this image
as my entry to the art contest.

36 samples, 642.91s (one per light)
36 samples, 262.92s (six per ray)
Instead of shooting one or more light rays for each light source, the
idea is to shoot one or more light rays per intersection, chosen in an unbiased
way. That way, the rendering time depends less on the number of light
sources. From the timings above, the per-intersection method is faster, even
when shooting six light rays per intersection. The scene lights.ray has 18
triangular light sources. It is hard to tell which image is more accurate, but
it seems that the closest face of the cube gets more light when the
one-ray-per-light method is used. That is because that method gives a
contribution from every light source that is visible to that face.
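Whatever the selection scheme, the estimate stays unbiased as long as each sampled light's contribution is divided by the probability of picking it. A sketch with uniform light selection (the per-light contributions are made up for illustration):

```python
import random

def estimate_direct(light_values, k, rng=random):
    """Unbiased estimate of sum(light_values) by sampling k lights
    uniformly (with replacement) and dividing by the pick probability."""
    n = len(light_values)
    total = 0.0
    for _ in range(k):
        i = rng.randrange(n)
        total += light_values[i] / (1.0 / n)   # pdf of picking light i is 1/n
    return total / k

# 18 hypothetical light contributions, matching the light count of lights.ray:
lights = [0.5 + 0.1 * i for i in range(18)]
exact = sum(lights)
```

Averaging many such estimates converges to the exact sum over all 18 lights; a smarter (importance-weighted) selection probability would converge faster but needs the same 1/pdf correction.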
900 samples, 18051.64s (~5h)
My Brazilian flag
Without photon mapping: 524 seconds
With 300,000 photons: 497 seconds
Cornell box with an octagonal spring (look at the shadow).
There are ~500 polygons here. It took less than 5 minutes for
16 rays.
Double helix
Img. 1 - r1.ray
rmonte r1.ray i1.bmp 12 16 16 -verbose -russian
5865.796875 seconds @ 3072 Samples / Pixel
Img. 2 - r2.ray
rmonte r2.ray i2.bmp 12 16 16 -verbose -russian
2714.843018 seconds @ 3072 Samples / Pixel
Both Img. 3 and Img. 4 are images of a Cornell Box. In each image, global indirect illumination causes the top of the box to be lit, as expected. The signature caustics produced by the transmissive spheres are also present under both of them.
Img. 3 - r3.ray
rmonte r3.ray i3.bmp 12 16 16 -verbose -russian
17640.078125 seconds @ 3072 Samples / Pixel
Img. 4 - r4.ray
rmonte r4.ray i4.bmp 12 16 16 -verbose -russian
9572.250000 seconds @ 3072 Samples / Pixel
Img. 7, Img. 8, Img. 9, and Img. 10 were all created to demonstrate the depth-of-field simulation implemented in my program. Img. 7 is the reference image. The center sphere is located farther from the camera than the left and right spheres, which are both the same distance from the camera. In Img. 8 the focal plane is set at the distance of the left and right spheres; as expected, the center sphere is blurred. In Img. 9 the focal plane is set at the center of the center sphere; as expected, the left and right spheres are blurred. In Img. 10 the focal plane was set in front of all the objects so that all of them would be blurred. I am not familiar with actual photography, so I don't know how plausible a 20 or 50 meter focal length is, or how plausible an aperture number of 32 or 42 is, but the images do look blurred as they should based on where the focal plane is located.
Img. 7 - r7.ray
rmonte r7.ray i7.bmp 12 16 16 -verbose -russian
5936.750000 seconds @ 3072 Samples / Pixel
Img. 8 - r7.ray
rmonte r7.ray i8.bmp 12 16 16 -verbose
-russian -depthoffield 20 32
6448.250000 seconds @ 3072 Samples / Pixel
Img. 9 - r7.ray
rmonte r7.ray i9.bmp 12 16 16 -verbose
-russian -depthoffield 50 42
6497.484863 seconds @ 3072 Samples / Pixel
Img. 10 - r6.ray
rmonte r6.ray i7.bmp 12 16 16 -verbose
-russian -depthoffield 20 32
8029.984863 seconds @ 3072 Samples / Pixel
Img. 5 - r5.ray
rmonte r5.ray i5.bmp 12 16 16 -verbose -russian
7928.296875 seconds @ 3072 Samples / Pixel