
In this assignment you will implement a basic raytracer. There is a large amount of flexibility in how sophisticated you may make your program, but at a minimum your raytracer will be able to make images like the one on the right.
You are responsible for implementing the interesting parts of the raytracer, including the recursive raytracing functions, the shape intersection functions, the light and shadow computation functions, and any acceleration data structures you wish. The rest of the infrastructure of the raytracer is provided in our skeleton code, which includes data structures for managing the ray-traced objects, linear algebra functions (vector and matrix objects and operations), a function for loading scene graph files into a prescribed node tree structure, a BMP image file importer/exporter (for images, textures, etc.), and a couple of supporting data structures for lights, materials, etc. Be sure to read the overview of the included skeleton code for help.
The raytracer uses a special .ray file format to specify its scenes. Read the overview of the ray file syntax for details. To help you debug, we provide a standalone utility that can render .ray files. The utility is implemented in OpenGL and does not support recursive ray-casting or transparency, but at least for the first few parts of the assignment your image should roughly agree with the image generated by the viewer. The viewer is available for Linux, Mac, and Windows (you might need glut32.dll if you're using Windows). Note that rayviewer assumes that if you are looking at the front of a triangle, its vertices are indexed in counter-clockwise order. We also provide several extra .ray files and an exporter for the 3D modeling program Blender. Check out the Blender .ray exporter page.
Matt Plough, 2005
You should use the code available here (2.tar.gz, 2.zip) as a starting point for your assignment. We provide you with:
- ray.[cpp/h]: Code responsible for casting rays, calling intersection methods, computing colors, etc.
- shape.h: Abstract base class that all shapes must implement.
- group.[cpp/h]: Shape subclass describing a scene-graph.
- rayFileInstance.[cpp/h]: Shape subclass describing the scene graph specified in a .ray file.
- triangle.[cpp/h]: Shape subclass describing a triangle.
- sphere.[cpp/h]: Shape subclass describing a sphere.
- cone.[cpp/h]: Shape subclass describing a cone.
- cylinder.[cpp/h]: Shape subclass describing a cylinder.
- box.[cpp/h]: Shape subclass describing a box.
- line.[cpp/h]: Shape subclass describing a line segment.
- light.h: Abstract base class that all lights must implement.
- pointLight.[cpp/h]: Light subclass describing a point light.
- directionalLight.[cpp/h]: Light subclass describing a directional light.
- spotLight.[cpp/h]: Light subclass describing a spot light.
- main.cpp: Parses the command line arguments and invokes the raytracer.
- scene.[cpp/h]: Code for the classes that store environmental information, textures, materials, rayFiles, etc.
- geometry.[cpp/h]: Most of the code for the geometric manipulation you will need (matrix multiplication, vector addition, etc.).
- boundingBox.[cpp/h]: Code for defining bounding boxes.
- bmp.[cpp/h]: Code responsible for reading and writing BMP files.
- implemented.[cpp/h]: Code defining a global flag that specifies whether unimplemented methods should announce themselves when they are invoked.
- RayFiles/: Directory containing a variety of .ray files.
- tracer.dsp: Visual C++ project file for Windows platforms.
- Makefile: Makefile suitable for UNIX platforms.
- rayviewer.Linux: A Linux-compiled ray-file viewer to look at .ray files.
- rayviewer.Darwin: A Mac OS X-compiled ray-file viewer to look at .ray files.
- rayviewer.exe: A Windows-compiled ray-file viewer to look at .ray files. You may also need glut32.dll.

As mentioned above, we provide extensive documentation for the provided files.
After you copy the provided files to your directory, the first thing to do is compile the program. If you are working on a Windows machine, double-click on tracer.dsp and select Build from the Build menu. If you are developing on a UNIX machine, type make. In either case an executable called tracer (or tracer.exe) will be created.
The program takes two mandatory arguments, the input (.ray) file name and the output (.bmp) file name. It is invoked from the command line with:

% tracer -src in.ray -dst out.bmp

Additionally, you can specify image height, image width, recursion depth and contribution limit as follows:

% tracer -src in.ray -dst out.bmp -width w -height h -rlim r -clim c

Feel free to add new arguments to deal with the new functionalities you are implementing. Just make sure they are documented.
The following is a list of features that you may implement. We strongly advise that you implement the features roughly in the order they are described, and test your code after you implement each feature. This assignment will require a significant amount of programming, so it is very important to make sure that the basic parts of your raytracer are correct before moving on. We provide images of the correct output for the first few features.
The assignment is worth 20 points. The number in parentheses corresponds to how many points a feature is worth. Options in bold are mandatory.
- (1) Modify RayTrace (const char* fileName, int width, int height, int rLimit, float cLimit) (in
ray.[cpp/h]) to generate and cast rays from the camera's position through pixels to construct an image of the scene.
- (1) Implement the Group::intersect (Ray ray, IntersectionInfo& iInfo) (in
group.[cpp/h]) to cast rays through scene-graph nodes. For now, ignore the local transformation and simply compute the intersection properties for the closest intersection within the list of Shapes associated with the Group.
- (1) Implement the Sphere::intersect(Ray ray, IntersectionInfo& iInfo) (in
sphere.[cpp/h]) method to compute ray intersections with a sphere.
After you implement these three steps, you should be able to generate this image with the following command:
tracer -src RayFiles/simple_sphere.ray -dst out.bmp
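If you are unsure where to start with the first step (generating rays through pixels), here is a minimal, self-contained sketch of mapping a pixel to a primary ray direction. It assumes a pinhole camera described by a view direction, an up vector, and a vertical half-angle in radians; all parameter names are placeholders, so adapt them to whatever the skeleton's camera and scene classes actually store (and to whether your height angle is a full or half angle, in degrees or radians). The ray origin is simply the camera position; only the direction varies per pixel.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, double s){ return {a.x * s, a.y * s, a.z * s}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 a) {
    double l = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return scale(a, 1.0 / l);
}

// Direction of the primary ray through pixel (i, j) of a width x height image,
// for a camera looking along `dir` with up vector `up` and vertical half-angle
// `heightAngle` (radians). The names here are assumptions, not skeleton fields.
Vec3 pixelRayDirection(int i, int j, int width, int height,
                       Vec3 dir, Vec3 up, double heightAngle) {
    Vec3 w = normalize(dir);                        // view direction
    Vec3 u = normalize(cross(w, up));               // camera "right"
    Vec3 v = cross(u, w);                           // camera "up", orthogonal to w
    double halfH = std::tan(heightAngle);           // image-plane half-height at distance 1
    double halfW = halfH * double(width) / double(height);
    // Map the pixel center into [-1, 1]^2 on the image plane.
    double sx = ((i + 0.5) / width) * 2.0 - 1.0;
    double sy = 1.0 - ((j + 0.5) / height) * 2.0;   // flip so +y points up in the image
    return normalize(add(w, add(scale(u, sx * halfW), scale(v, sy * halfH))));
}
```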
- (2) Implement the Triangle::intersect(Ray ray, IntersectionInfo& iInfo) (in
triangle.[cpp/h]) method to compute ray intersections with a triangle.
After you implement this step, you should be able to generate this image with the following command:
tracer -src RayFiles/simple_triangle.ray -dst out.bmp
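One standard way to implement the triangle test is the Möller–Trumbore algorithm, sketched below on a minimal vector type (the skeleton's Point3D and Ray classes provide equivalent operations, so this is only meant to show the math). The barycentric coordinates it produces are also exactly what you will need later for interpolating texture coordinates.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Distance t along the ray (origin o, direction d) to triangle (v0, v1, v2),
// or -1 if there is no hit. u and v are the barycentric coordinates of the hit.
double intersectTriangle(Vec3 o, Vec3 d, Vec3 v0, Vec3 v1, Vec3 v2,
                         double& u, double& v) {
    const double EPS = 1e-9;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(d, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < EPS) return -1.0;       // ray parallel to the triangle plane
    double inv = 1.0 / det;
    Vec3 s = sub(o, v0);
    u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return -1.0;
    Vec3 q = cross(s, e1);
    v = dot(d, q) * inv;
    if (v < 0.0 || u + v > 1.0) return -1.0;
    double t = dot(e2, q) * inv;
    return (t > EPS) ? t : -1.0;                 // hit must lie in front of the origin
}
```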
- (1) Modify GetColor(Scene scene, Ray ray, IntersectionInfo iInfo, int rDepth, float cLimit) (in
ray.[cpp/h]) to return the color at the point of intersection using the ambient and emissive properties of the Material, and call this function in RayTrace(const char* fileName, int width, int height, int rLimit, float cLimit) to compute the color at a point of intersection.
You should then be able to generate this image with:
tracer -src RayFiles/triangle_sphere.ray -dst out.bmp
- (1) To obtain the diffuse color contribution of the lights at the point of intersection, implement:
  - PointLight::getDiffuse(Point3D cameraPosition, IntersectionInfo iInfo) (in pointLight.[cpp/h]);
  - SpotLight::getDiffuse(Point3D cameraPosition, IntersectionInfo iInfo) (in spotLight.[cpp/h]); and
  - DirectionalLight::getDiffuse(Point3D cameraPosition, IntersectionInfo iInfo) (in directionalLight.[cpp/h]).

Then modify GetColor(Scene scene, Ray ray, IntersectionInfo iInfo, int rDepth, float cLimit) (in ray.[cpp/h]) so that the returned color takes into account the diffuse lighting component.
Diffuse lighting, with scenes from left to right: triangle_sphere_point.ray, triangle_sphere_spot.ray, triangle_sphere_direc.ray, and triangle_sphere_2.ray
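For reference, the diffuse term computed in this step is just Lambert's law scaled by the light color and, for point and spot lights, a distance attenuation factor. Below is a minimal per-channel sketch assuming the usual constant/linear/quadratic attenuation model; check what your light classes actually store. The spot-light version multiplies in an additional falloff based on the angle between the light direction and the spotlight axis, and the directional-light version drops the attenuation and uses a fixed direction.

```cpp
#include <algorithm>

struct Vec3 { double x, y, z; };
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Per-channel diffuse contribution of a point light.
//   kd    - material diffuse coefficient for this channel
//   light - light intensity for this channel
//   N     - unit surface normal; L - unit vector from the hit point to the light
//   dist  - distance to the light; ca, cl, cq - attenuation constants
double pointLightDiffuse(double kd, double light, Vec3 N, Vec3 L,
                         double dist, double ca, double cl, double cq) {
    double lambert = std::max(0.0, dot(N, L));            // back-facing points get no light
    double atten   = 1.0 / (ca + cl * dist + cq * dist * dist);
    return kd * light * lambert * atten;
}
```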
- (1) To obtain the specular color contribution of the lights at the point of intersection, implement:
  - PointLight::getSpecular(Point3D cameraPosition, IntersectionInfo iInfo) (in pointLight.[cpp/h]);
  - SpotLight::getSpecular(Point3D cameraPosition, IntersectionInfo iInfo) (in spotLight.[cpp/h]); and
  - DirectionalLight::getSpecular(Point3D cameraPosition, IntersectionInfo iInfo) (in directionalLight.[cpp/h]).

Again, modify GetColor(Scene scene, Ray ray, IntersectionInfo iInfo, int rDepth, float cLimit) (in ray.[cpp/h]) so that the returned color takes into account the specular component.
The same four scenes as above, with specular highlights.
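The specular step follows the same pattern using the Phong model: reflect the light direction about the normal and raise its alignment with the view direction to the material's specular exponent. A small sketch follows; how the exponent and coefficient are stored in Material is an assumption, so read them from wherever your material class keeps them.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };
static double dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(Vec3 a, Vec3 b)     { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }

// Phong specular factor: N and L are unit vectors as in the diffuse case,
// V is the unit vector from the hit point toward the camera, and n is the
// specular exponent ("shininess"). Multiply the result by the material's
// specular coefficient and the light intensity, per channel.
double specularFactor(Vec3 N, Vec3 L, Vec3 V, double n) {
    Vec3 R = sub(scale(N, 2.0 * dot(N, L)), L);            // mirror L about N
    return std::pow(std::max(0.0, dot(R, V)), n);
}
```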
- (1) Implement:
  - PointLight::isInShadow(IntersectionInfo iInfo, Shape* shape) (in pointLight.[cpp/h]);
  - SpotLight::isInShadow(IntersectionInfo iInfo, Shape* shape) (in spotLight.[cpp/h]); and
  - DirectionalLight::isInShadow(IntersectionInfo iInfo, Shape* shape) (in directionalLight.[cpp/h]).

Modify GetColor(Scene scene, Ray ray, IntersectionInfo iInfo, int rDepth, float cLimit) (in ray.[cpp/h]) so that the diffuse and specular components of each light are excluded if the point is in shadow with respect to that light.
The same scene as above, with shadows.
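The shadow test amounts to casting one extra ray from the intersection point toward the light and checking whether anything blocks it before the light is reached. A sketch for a point light follows; `nearestHit` is a placeholder for however you query the scene for the closest intersection (e.g. the root Group's intersect method), and the small normal offset guards against the shadow ray immediately re-hitting the surface it started on ("shadow acne"). For a directional light there is no distance to the light, so any positive hit distance means the point is in shadow.

```cpp
#include <cmath>
#include <functional>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)     { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b)     { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double length(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Point-light shadow test. nearestHit(origin, dir) should return the distance
// to the closest intersection along the ray, or a negative value for a miss.
bool isInShadow(Vec3 hitPoint, Vec3 hitNormal, Vec3 lightPos,
                const std::function<double(Vec3, Vec3)>& nearestHit) {
    Vec3 toLight = sub(lightPos, hitPoint);
    double distToLight = length(toLight);
    Vec3 dir = scale(toLight, 1.0 / distToLight);
    // Nudge the origin off the surface so the shadow ray does not re-hit it.
    Vec3 origin = add(hitPoint, scale(hitNormal, 1e-4));
    double t = nearestHit(origin, dir);
    return t > 0.0 && t < distToLight;           // occluded only if the hit is before the light
}
```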
- (2) Modify the implementation of GetColor(Scene scene, Ray ray, IntersectionInfo iInfo, int rDepth, float cLimit) (in
ray.[cpp/h]) to recursively cast reflected rays at the point of intersection and add the reflected color contribution to the returned color value.
The same scene, showing the reflection of the red triangle in the blue sphere.
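The reflected direction is the incoming direction mirrored about the surface normal, R = D - 2(D·N)N. A minimal sketch:

```cpp
struct Vec3 { double x, y, z; };
static double dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(Vec3 a, Vec3 b)     { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }

// Mirror reflection of an incoming ray direction D about the unit normal N
// (both assumed normalized).
Vec3 reflect(Vec3 D, Vec3 N) {
    return sub(D, scale(N, 2.0 * dot(D, N)));
}
```

Inside GetColor you would then cast a new ray from the (slightly offset) hit point in this direction with rDepth reduced by one, scale the returned color by the material's reflective/specular coefficient, and skip the recursion entirely when that coefficient falls below cLimit.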
- (1) Modify the implementation of Group::intersect(Ray ray, IntersectionInfo& iInfo) (in
group.[cpp/h]) to take into account the local transformation of the Group. (You can do this by using the transformation to convert the ray into object coordinates, computing the intersection, using the local transformation to convert intersection properties back into world coordinates, etc.). This is somewhat tricky because the transformation may not be distance-preserving, so you have to make sure that your normals are going in the right direction and that vectors you expect to be unit vectors are actually unit vectors.
The scene triangle_sphere_3.ray showing the effect of transforming the sphere and triangle.
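The usual recipe is: pull the ray into object space with the node's inverse transform (the origin as a point, the direction as a direction), intersect there, then push the hit point back to world space with the forward transform and the normal back with the inverse transpose, re-normalizing afterwards. The sketch below uses plain row-major 4x4 arrays; the skeleton's Matrix class provides equivalent operations, and the helper names here are made up.

```cpp
struct Vec3 { double x, y, z; };

// Apply a row-major 4x4 matrix to a point (w = 1) or to a direction (w = 0).
// Ray origins transform as points, ray directions as directions.
Vec3 xform(const double m[4][4], Vec3 v, double w) {
    return { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*w,
             m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*w,
             m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*w };
}

// Normals transform by the inverse transpose: given the inverse `minv` of the
// node's transform, multiply by its transpose (note the swapped indices).
// Re-normalize the result, since the transform may include scaling.
Vec3 xformNormal(const double minv[4][4], Vec3 n) {
    return { minv[0][0]*n.x + minv[1][0]*n.y + minv[2][0]*n.z,
             minv[0][1]*n.x + minv[1][1]*n.y + minv[2][1]*n.z,
             minv[0][2]*n.x + minv[1][2]*n.y + minv[2][2]*n.z };
}
```

Also be careful with the intersection distance: if the object-space direction is not unit length, convert the distance back to world space, for example by recomputing it from the world-space hit point and the original ray origin.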
- (1) Modify the implementation of GetColor(Scene scene, Ray ray,IntersectionInfo iInfo, int rDepth, float cLimit) (in
ray.[cpp/h]) to recursively cast refracted rays through the point of intersection and add the refracted color contribution to the returned color value. (For now, you should ignore the refraction index.)
- (1) Implement a jittered supersampling scheme to reduce aliasing by casting multiple rays per pixel, randomly jittered about pixel centers, and averaging the radiance samples.
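A minimal sketch of the jittered scheme: split each pixel into an n×n grid of cells and shoot one ray through a uniformly random point in each cell, then average the results. `shade` stands in for "build the primary ray through image position (x, y) and call GetColor", so it is a placeholder rather than a skeleton function, and the RNG can be anything you prefer.

```cpp
#include <cstdlib>

// Uniform random value in [0, 1).
double randomUnit() { return std::rand() / (RAND_MAX + 1.0); }

// Averaged radiance for pixel (i, j) over n*n jittered sub-pixel samples
// (shown for a single color channel).
template <typename ShadeFn>
double samplePixel(int i, int j, int n, ShadeFn shade) {
    double sum = 0.0;
    for (int a = 0; a < n; a++)
        for (int b = 0; b < n; b++) {
            double x = i + (a + randomUnit()) / n;   // jittered position inside the cell
            double y = j + (b + randomUnit()) / n;
            sum += shade(x, y);
        }
    return sum / (n * n);
}
```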
- (1) Implement the Box::intersect(Ray ray, IntersectionInfo& iInfo) (in
box.[cpp/h]) method to compute ray intersections with a box.
- (1) Implement the Cylinder::intersect(Ray ray, IntersectionInfo& iInfo) (in
cylinder.[cpp/h]) method to compute ray intersections with a cylinder.
- (1) Implement the Cone::intersect(Ray ray, IntersectionInfo& iInfo) (in
cone.[cpp/h]) method to compute ray intersections with a cone.
- (2) Accelerate ray intersection tests with hierarchical bounding boxes. To do this you will have to:
- Implement BoundingBox::BoundingBox(Point3D* pList, int listSize) constructor. (This will create a box containing the specified list of points.)
- Implement BoundingBox::operator+ (BoundingBox b) method. (This will return a bounding box which contains the union of the two bounding boxes.)
- Implement the BoundingBox::intersect(Ray ray) method. (This will return the distance along the ray to the nearest point of intersection with the bounding box.)
- Implement the BoundingBox::transform(Matrix m) method. (This will return the bounding box containing the transformed -- no longer axis aligned -- bounding box.)
- Implement the Shape::getBoundingBox(void) for each of the Shape subclasses that you have implemented. This method will have to return the bounding box for that shape. Additionally, when modifying Group::getBoundingBox(void) you will have to accumulate the bounding boxes of all the child Shapes, transform them, find the bounding box of the transformed bounding box, store that and return it. (Note: When the parser is done reading the .ray file it automatically calls the Shape::getBoundingBox(void) method for the root node, so that if you have implemented this method for all of the subclasses of Shape, the bounding boxes are already in place to be used for intersection queries, and you do not have to reset them.)
- Implement Group::intersect(Ray ray, IntersectionInfo& iInfo) to support testing ray intersection with the bounding box before testing for intersection with all child Shapes.
- Optimize the bounding box hierarchy so that when Group::intersect(Ray ray, IntersectionInfo& iInfo) is called, the Group checks all the bounding boxes first, chooses the one closest to the Ray, tests for intersection with the Shape corresponding to that bounding box, and only tests those Shapes whose bounding box intersection is closer than the current closest intersection point.
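The standard way to implement BoundingBox::intersect (and, with the box's own extents, Box::intersect) is the slab method: intersect the ray with the three pairs of axis-aligned planes and keep the overlap of the resulting intervals. A sketch:

```cpp
#include <algorithm>

// Slab test for an axis-aligned box [lo, hi]. Returns the distance to the
// nearest intersection in front of the ray origin, or -1 if the ray misses.
// o and d are the ray origin and direction as 3-element arrays.
double intersectAABB(const double o[3], const double d[3],
                     const double lo[3], const double hi[3]) {
    double tNear = -1e30, tFar = 1e30;
    for (int a = 0; a < 3; a++) {
        if (d[a] == 0.0) {                          // ray parallel to this slab
            if (o[a] < lo[a] || o[a] > hi[a]) return -1.0;
            continue;
        }
        double t0 = (lo[a] - o[a]) / d[a];
        double t1 = (hi[a] - o[a]) / d[a];
        if (t0 > t1) std::swap(t0, t1);
        tNear = std::max(tNear, t0);
        tFar  = std::min(tFar, t1);
        if (tNear > tFar) return -1.0;              // slab intervals do not overlap
    }
    if (tFar < 0.0) return -1.0;                    // box is entirely behind the ray
    return (tNear > 0.0) ? tNear : tFar;            // origin inside the box: exit distance
}
```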
- (2) Modify Triangle::intersect(Ray ray, IntersectionInfo& iInfo) (in
triangle.[cpp/h]) to return the texture coordinates at the point of intersection and modify GetColor(Scene scene, Ray ray, IntersectionInfo iInfo, int rDepth, float cLimit) to support texture mapping (with bilinear interpolation of texture samples).
- (1) Use the index of refraction and Snell's Law to calculate the correct direction of rays transmitted through transparent surfaces and modify GetColor(Scene scene, Ray ray, IntersectionInfo iInfo, int rDepth, float cLimit) appropriately.
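A sketch of the transmitted-direction computation from Snell's law, with the total-internal-reflection case handled. It assumes the normal passed in faces the incoming ray; flip the normal and swap the two indices when the ray is exiting the object.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static double dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 add(Vec3 a, Vec3 b)     { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }

// Refracted direction for an incoming unit direction D hitting a surface with
// unit normal N, going from a medium of index n1 into one of index n2.
// Returns false on total internal reflection (no transmitted ray).
bool refract(Vec3 D, Vec3 N, double n1, double n2, Vec3& T) {
    double eta   = n1 / n2;
    double cosI  = -dot(D, N);                      // N assumed to face the incoming ray
    double sinT2 = eta * eta * (1.0 - cosI * cosI);
    if (sinT2 > 1.0) return false;                  // total internal reflection
    double cosT = std::sqrt(1.0 - sinT2);
    T = add(scale(D, eta), scale(N, eta * cosI - cosT));
    return true;
}
```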
- (1) Treat point/spot lights as having a finite 'area' and cast a collection of rays during shadow checking to generate soft shadows. That is, if all shadow rays are blocked or unblocked we have zero or full lighting from the source in question just as before, but if only a fraction of the shadow rays are blocked the light is only partially attenuated. Some randomized and/or adaptive scheme should be used to avoid banding.
- (1) Modify Sphere::intersect(Ray ray, IntersectionInfo& iInfo) (in sphere.[cpp/h]) to return the texture coordinates at the point of intersection (longitude and latitude) and modify GetColor(Scene scene, Ray ray, IntersectionInfo iInfo, int rDepth, float cLimit) to support texture mapping (with bilinear interpolation of texture samples).
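For the sphere, the texture coordinates can be derived directly from the unit normal at the hit point; a sketch follows. The axis conventions are an assumption, so rotate or flip the coordinates to match how your textures are oriented.

```cpp
#include <cmath>

// Longitude/latitude texture coordinates for a point on a sphere, given the
// components of the (unit) outward normal at the hit point.
void sphereUV(double nx, double ny, double nz, double& u, double& v) {
    const double PI = 3.14159265358979323846;
    u = 0.5 + std::atan2(nz, nx) / (2.0 * PI);      // longitude, mapped to [0, 1)
    v = 0.5 - std::asin(ny) / PI;                   // latitude, mapped to [0, 1]
}
```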
- (1) Implement procedural texture mapping with Perlin noise functions to create 3-D solid wood, marble, etc.
- (1) Implement bump mapping for either or both texturing schemes.
- (1) Implement depth-of-field camera effects.
- (1) Simulate the behavior of a real camera lens by implementing the procedure in this SIGGRAPH paper.
- (1) Use a grid spatial data structure to accelerate ray intersections.
- (2) Use an octree spatial data structure to accelerate ray intersections.
- (2) Use a BSP spatial data structure to accelerate ray intersections.
- (1) Add to your writeup a comparison of two spatial acceleration schemes.
- (?) Impress us with something we hadn't considered...
By implementing all the required features, you get 13 points. There are many ways to get more points:
- implementing the optional features listed above,
- (1) submitting 3D models you constructed,
- (1) submitting images for the art contest,
- (1) submitting a .mpeg movie with a sequence of ray-traced images resulting from a continuous camera path (e.g., use the makemovie command on the SGIs), and
- (2) winning the art contest.
It is possible to get more than 20 points. However, as in the previous assignment, after 20 points, each point is divided by 2, and after 22 points, each point is divided by 4.
The following functions have not been completely implemented:
- RayTrace(const char* fileName, int width, int height, int rLimit, float cLimit) (in ray.[cpp/h])
- GetColor(Scene scene, Ray ray, IntersectionInfo iInfo, int rDepth, float cLimit) (in ray.[cpp/h])
- Sphere::intersect(Ray ray, IntersectionInfo& iInfo) (in sphere.[cpp/h])
- Sphere::GetBoundingBox(void) (in sphere.[cpp/h])
- Triangle::intersect(Ray ray, IntersectionInfo& iInfo) (in triangle.[cpp/h])
- Triangle::GetBoundingBox(void) (in triangle.[cpp/h])
- Group::intersect(Ray ray, IntersectionInfo& iInfo) (in group.[cpp/h])
- Group::GetBoundingBox(void) (in group.[cpp/h])
- Box::intersect(Ray ray, IntersectionInfo& iInfo) (in box.[cpp/h])
- Box::GetBoundingBox(void) (in box.[cpp/h])
- Cylinder::intersect(Ray ray, IntersectionInfo& iInfo) (in cylinder.[cpp/h])
- Cylinder::GetBoundingBox(void) (in cylinder.[cpp/h])
- Cone::intersect(Ray ray, IntersectionInfo& iInfo) (in cone.[cpp/h])
- Cone::GetBoundingBox(void) (in cone.[cpp/h])
- PointLight::getDiffuse(Point3D cameraPosition, IntersectionInfo iInfo) (in pointLight.[cpp/h])
- PointLight::getSpecular(Point3D cameraPosition, IntersectionInfo iInfo) (in pointLight.[cpp/h])
- PointLight::isInShadow(IntersectionInfo iInfo, Shape* shape) (in pointLight.[cpp/h])
- SpotLight::getDiffuse(Point3D cameraPosition, IntersectionInfo iInfo) (in spotLight.[cpp/h])
- SpotLight::getSpecular(Point3D cameraPosition, IntersectionInfo iInfo) (in spotLight.[cpp/h])
- SpotLight::isInShadow(IntersectionInfo iInfo, Shape* shape) (in spotLight.[cpp/h])
- DirectionalLight::getDiffuse(Point3D cameraPosition, IntersectionInfo iInfo) (in directionalLight.[cpp/h])
- DirectionalLight::getSpecular(Point3D cameraPosition, IntersectionInfo iInfo) (in directionalLight.[cpp/h])
- DirectionalLight::isInShadow(IntersectionInfo iInfo, Shape* shape) (in directionalLight.[cpp/h])
- BoundingBox::BoundingBox(Point3D* pList, int pSize) (in boundingBox.[cpp/h])
- BoundingBox::operator+ (BoundingBox b) (in boundingBox.[cpp/h])
- BoundingBox::transform(Matrix m) (in boundingBox.[cpp/h])
- BoundingBox::intersect(Ray ray) (in boundingBox.[cpp/h])
You should submit:
- the complete source code with a Makefile,
- any *.ray files you created (optional),
- the .mpeg movie for the movie feature (optional),
- the images for the art contest (optional), and
- a writeup.
The writeup should be an HTML document called assignment2.html, which may include other documents or pictures. It should be brief, describing what you have implemented, what works and what doesn't, how you created the art contest images and/or movies, and any relevant instructions on how to run your program.
Make sure the source code compiles on the machines in Friend 017. Always remember the late policy and the collaboration policy.
- Visit the POVRAY site, home of a popular freeware raytracer. Check out the links to the still and animated competitions for inspiration.
- To check that your rays are being cast in the right direction, you may want to modify your ray-casting code to write out a new .ray file with some of the cast rays displayed. You can do this by modifying your RayTrace(const char* fileName, int width, int height, int rLimit, float cLimit) routine to look something like:

```cpp
int i, j;
Ray ray, r;
FILE* fp;
...
fp = fopen("temp.ray", "w");
assert(fp);
scene.write(fp);
for (i = 0; i < width; i++) {
    for (j = 0; j < height; j++) {
        ray = Ray(...);
        ...
        // Write out a short segment along every 20th ray in each direction.
        if (i % 20 == 0 && j % 20 == 0) {
            Line(Ray(ray.p, ray(5.0)), scene.getMaterial(0)).write(0, fp);
        }
        ...
    }
}
fclose(fp);
...
```

Then run:

tracer -src RayFiles/simple_sphere.ray -dst out.bmp

Now you can call

viewer -src temp.ray

and after moving around a bit you will get an image that looks like this:
You should make sure that all of your rays are coming out of a single point and that they are all heading towards the sphere.
- What is the "contribution limit" and how do I use it?
The contribution limit is used to determine when the value of a color returned by casting secondary rays will be too small to be worth computing. Specifically, if you are casting specular (respectively transparent) rays, then before adding the color obtained by casting secondary rays, you will scale this color by the specular (respectively transparency) coefficient. Since the color coefficients must be between 0.0 and 1.0 the specular (respectively transparency) coefficient tells you in advance the upper bound on the brightness of the returned color. Thus if the specular (respectively transparency) contribution is less than the contribution limit you know that you do not need to send off secondary rays in the specular (respectively transparent) direction.- What exactly is the scene graph?
The scene graph is basically a Group, which is described here. It is just a linked list of shapes.- I implemented some of the optional features, such as supersampling? Should I add new command line parameters for these features?
Yes, by all means. Just remember to document them (in your writeup and in the program itself).- It seems that Box, Cylinder, and Cone are all defined to be axis-aligned. What if I want them in some arbitrary orientation?
Just create a Group with the appropriate transformation.- What if a ray hits the background?
Just return the background color.- How exactly do textures work? Do they replace other colors in the material?
No. The usual way to deal with textures is simply to multiply the texture color by the color the object would have if no texture were present. For example, assume your calculations (disregarding texture) determine that the color of a given pixel should be (0.75, 0.60, 1.00). If the texture color in that point is (0.80, 0.50, 0.75), the final color should be (0.60, 0.30, 0.75).- How do I transform normal vectors correctly?
In short, you use the Matrix.multNormal() function defined in geometry.cpp. For an explanation of what this function does, see the OpenGL Programming Guide Appendix F.