I have published ten peer-reviewed computer graphics papers and two book chapters, in addition to three papers in other fields. Much of my earlier work, including that for which I received my Master's degree, concentrated on mesh decimation (also known as mesh simplification). Decimation algorithms accept a high-resolution triangle mesh as input and produce a mesh with far fewer triangles (often a few percent of the original) that approximates it. Because the time required to display a mesh is proportional to its triangle count, computers can display such simplified meshes much faster than the originals. This matters as 3D scanners are increasingly used to generate meshes from physical objects; the resulting meshes often carry significantly more detail than necessary.
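The input/output contract of a decimation algorithm can be illustrated with a minimal sketch. The example below uses vertex clustering, one of the simplest simplification techniques (it is an illustration only, not the specific algorithms from my own work): vertices are snapped to a uniform grid, and triangles whose corners collapse together are discarded.

```python
from collections import defaultdict

def simplify_vertex_clustering(vertices, triangles, cell_size):
    """Crude mesh simplification: snap vertices to a uniform grid.

    vertices:  list of (x, y, z) tuples
    triangles: list of (i, j, k) index triples into vertices
    Returns a (vertices, triangles) pair with fewer triangles.
    """
    def cell_of(v):
        # Grid cell containing vertex v.
        return tuple(int(c // cell_size) for c in v)

    # Group vertex indices by grid cell.
    cells = defaultdict(list)
    for i, v in enumerate(vertices):
        cells[cell_of(v)].append(i)

    # One representative vertex per cell: the average of its members.
    cell_index = {}
    new_vertices = []
    for cell, members in cells.items():
        avg = tuple(sum(vertices[i][k] for i in members) / len(members)
                    for k in range(3))
        cell_index[cell] = len(new_vertices)
        new_vertices.append(avg)

    remap = {i: cell_index[cell_of(v)] for i, v in enumerate(vertices)}

    # Keep only triangles whose corners land in three distinct cells;
    # degenerate and duplicate triangles collapse away.
    new_triangles = []
    seen = set()
    for a, b, c in triangles:
        t = (remap[a], remap[b], remap[c])
        if len(set(t)) == 3 and frozenset(t) not in seen:
            seen.add(frozenset(t))
            new_triangles.append(t)
    return new_vertices, new_triangles
```

Quality-driven decimation algorithms instead choose each removal to minimize geometric error, which is why they preserve the original shape far better than this grid-snapping sketch.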
My later work (including my Ph.D. thesis) concentrated on rendering, the process of generating a final image from a given description of the world. In particular, I was interested in algorithms that allow a designer to exercise creative control over the appearance and detail of the rendered scene, especially with respect to shadows and global illumination.
As a “detour and frolic,” I collaborated with Zafer Barutçuoglu on a method (which we termed “Bayesian Aggregation”) for hierarchical classification. We start with a labeled hierarchical example dataset – for example, one of 3D models that are labeled “Animal,” “Bird,” “Eagle,” “Duck,” etc. Then, given a new unlabeled example, we predict which classes it belongs to. Our contribution is to do so in a way that takes advantage of the hierarchy information (for example, if an object is an Eagle, it must also be a Bird and an Animal). We applied this to shapes, music genres, and protein function, significantly improving overall accuracy.
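The hierarchy constraint mentioned above can be made concrete with a small sketch. This is not the Bayesian Aggregation method itself (which combines per-class classifier outputs probabilistically); it only shows the consistency requirement, assuming a hypothetical parent map where each class names its parent (and a root class has parent None): any predicted label set must be closed under ancestors.

```python
def ancestors(cls, parent):
    """All ancestors of cls under the parent map (root has parent None)."""
    out = []
    while parent.get(cls) is not None:
        cls = parent[cls]
        out.append(cls)
    return out

def make_consistent(predicted, parent):
    """Close a predicted label set under the hierarchy:
    if 'Eagle' is predicted, 'Bird' and 'Animal' must be too."""
    closed = set(predicted)
    for cls in predicted:
        closed.update(ancestors(cls, parent))
    return closed
```

For example, with `parent = {"Animal": None, "Bird": "Animal", "Eagle": "Bird"}`, predicting only “Eagle” is inconsistent, and `make_consistent` expands it to include “Bird” and “Animal.”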