Stereopsis

In this problem, we will see how to recover the 3D shape of an object from its images taken from two different viewpoints. In particular, given 2D point correspondences between the two images, we will estimate the corresponding 3D positions in the scene as well as the positions and orientations of the two cameras. The project uses two image pairs, house and library. Since a camera's intrinsic parameters are nowadays typically known and stored when a picture is taken, we use the given calibration matrices K1 and K2 for both pairs. In general, these parameters must be estimated via camera calibration when they are unknown, but that is not the focus of this project. The point correspondences for both pairs are likewise precomputed; in general, correspondences are found by detecting interest points in each image and matching their descriptors across images (there are many ways of approaching this, but again, not in this project).

Given a pair of images, their corresponding points, and the intrinsic parameters of the two cameras, we can estimate the 3D positions of the points as well as the camera matrices. First, we compute the fundamental matrix; I did so using the 8-point algorithm described in the PDF linked below. Next, we form the essential matrix and decompose it to estimate the extrinsic parameters of the second camera. Of the four possible combinations of rotation and translation, we keep the one that places the most triangulated points in front of both image planes. Finally, we triangulate the 3D points and plot them together with the two camera centers. The point clouds are displayed below.
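As a sketch of the first step, the 8-point algorithm estimates F from at least eight correspondences by solving a homogeneous linear system and then enforcing the rank-2 constraint. The NumPy snippet below is a minimal illustration (using the normalized variant for numerical stability), not the project's actual code; all names are my own.

```python
import numpy as np

def eight_point(pts1, pts2):
    """Estimate the fundamental matrix F (with x2^T F x1 = 0) from
    N >= 8 correspondences given as (N, 2) pixel-coordinate arrays."""
    def normalize(pts):
        # Hartley normalization: move the centroid to the origin and
        # scale so the mean distance from it is sqrt(2).
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0.0, -s * c[0]],
                      [0.0, s, -s * c[1]],
                      [0.0, 0.0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T

    p1, T1 = normalize(np.asarray(pts1, float))
    p2, T2 = normalize(np.asarray(pts2, float))

    # Each correspondence contributes one row of the homogeneous system A f = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    # Least-squares solution: the right singular vector of A associated
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)

    # Enforce rank 2 by zeroing the smallest singular value of F.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt

    # Undo the normalization and fix the overall scale.
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

With noise-free synthetic correspondences, the recovered F should satisfy the epipolar constraint x2^T F x1 ≈ 0 for every pair up to numerical precision.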

This project is a step-by-step implementation of the algorithms described in this paper: algorithms.pdf
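The remaining steps (decomposing the essential matrix, the cheirality check, and linear triangulation) can be sketched as follows. This is a minimal illustration under the convention x2 = R x1 + t, so that E = [t]× R; the helper names are mine, and the translation is recovered only up to scale, as is inherent to the method.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence from two
    3x4 projection matrices; returns the 3D point in world coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def pose_from_essential(E, K1, K2, pts1, pts2):
    """Decompose E into the four candidate (R, t) pairs and keep the one
    that places the most triangulated points in front of both cameras."""
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (det = +1) in the decomposition.
    if np.linalg.det(U) < 0: U = -U
    if np.linalg.det(Vt) < 0: Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    candidates = [(U @ W @ Vt, U[:, 2]), (U @ W @ Vt, -U[:, 2]),
                  (U @ W.T @ Vt, U[:, 2]), (U @ W.T @ Vt, -U[:, 2])]
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    best = None
    for R, t in candidates:
        P2 = K2 @ np.hstack([R, t.reshape(3, 1)])
        Xs = np.array([triangulate(P1, P2, a, b) for a, b in zip(pts1, pts2)])
        z1 = Xs[:, 2]                  # depth in camera 1 (at the origin)
        z2 = (Xs @ R.T + t)[:, 2]      # depth in camera 2
        n_front = np.sum((z1 > 0) & (z2 > 0))
        if best is None or n_front > best[0]:
            best = (n_front, R, t, Xs)
    return best[1], best[2], best[3]
```

On clean synthetic data, the cheirality test uniquely identifies the true pose: only one of the four candidates places all the points in front of both image planes.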

Original House Images (Left and Right)

Point correspondences (precomputed)

Generated Point Clouds

Left image: point cloud from a viewpoint similar to that of the original image.

Right image: top view of the point cloud, to show geometric accuracy.

Original Library Images (Left and Right)

Point correspondences (precomputed)

Generated Point Clouds

Left image: point cloud from a viewpoint similar to that of the original image.

Right image: top view of the point cloud, to show geometric accuracy.