Understanding how eQTLs work by looking across studies, cell types, and regulatory element data
As part of the GTEx consortium, and in collaboration with Casey Brown, we have conducted a large-scale replication analysis spanning eleven eQTL studies in seven tissue types. We have overlaid these results onto regulatory element data, which gives a much deeper mechanistic understanding of eQTLs by showing where eQTLs, including cell-type-specific eQTLs, co-locate with particular cis-regulatory elements.
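As a rough sketch of this overlap step, the snippet below counts how often eQTL variants fall inside annotated cis-regulatory elements. The coordinates, element types, and data layout are invented for illustration; a genome-scale pipeline would use interval indexes rather than the linear scan shown here.

    # Count eQTL variants that co-locate with annotated regulatory elements.
    from collections import defaultdict

    def count_eqtl_overlaps(eqtls, elements):
        """eqtls: (chrom, pos) pairs; elements: (chrom, start, end, type)
        tuples with 0-based, half-open coordinates."""
        by_chrom = defaultdict(list)
        for chrom, start, end, etype in elements:
            by_chrom[chrom].append((start, end, etype))
        counts = defaultdict(int)
        for chrom, pos in eqtls:
            for start, end, etype in by_chrom.get(chrom, ()):
                if start <= pos < end:
                    counts[etype] += 1
        return dict(counts)

    eqtls = [("chr1", 1500), ("chr1", 9000)]
    elements = [("chr1", 1000, 2000, "enhancer"),
                ("chr1", 8500, 9500, "promoter")]
    print(count_eqtl_overlaps(eqtls, elements))  # {'enhancer': 1, 'promoter': 1}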
We are currently developing statistical models for understanding eQTLs and variants that influence mRNA isoform levels in RNA-seq data. We are also working on predictive models for eQTLs across tissue types and on models that assess the replication of trans-eQTLs.
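The elementary building block behind such models is the single-variant association test. Below is a minimal sketch on simulated data: regress a gene's expression on genotype dosage (0, 1, or 2 copies of the alternate allele) and test whether the slope differs from zero. Real analyses add covariates, normalization, and multiple-testing correction.

    # Simplest cis-eQTL test: expression ~ genotype dosage (simulated data).
    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(0)
    dosage = rng.integers(0, 3, size=200)              # genotypes 0/1/2
    expression = 0.5 * dosage + rng.normal(size=200)   # additive effect + noise

    fit = linregress(dosage, expression)
    print(f"effect size = {fit.slope:.2f}, p-value = {fit.pvalue:.2e}")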
Untrusted cloud storage and social networks
The VELOCITY Compiler Project aims to address computer architecture problems with a new approach to compiler organization. Embodied in the VELOCITY Compiler (and derivative run-time optimizers), this organization enables true whole-program scope, practical iterative compilation, and smarter memory analysis. These properties make VELOCITY better at extracting threads, improving reliability, and enhancing security.
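To make "practical iterative compilation" concrete, here is a generic driver-loop sketch: build the program under several candidate option sets, measure each build, and keep the fastest. The compiler command, flags, and test program are stand-ins, not VELOCITY's actual interface.

    # Generic iterative-compilation loop (hypothetical commands and flags;
    # assumes a local program.c to build and run).
    import subprocess
    import time

    CANDIDATES = [["-O1"], ["-O2"], ["-O3", "-funroll-loops"]]

    def build_and_time(flags):
        subprocess.run(["cc", "program.c", "-o", "program", *flags], check=True)
        start = time.perf_counter()
        subprocess.run(["./program"], check=True)
        return time.perf_counter() - start

    best = min(CANDIDATES, key=build_and_time)
    print("best flag set:", best)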
The software toolchain includes static analyzers to check assertions about your program, optimizing compilers to translate your program to machine language, and operating systems and libraries to supply context for your program. The Verified Software Toolchain project assures, with machine-checked proofs, that the assertions claimed at the top of the toolchain really hold in the machine-language program running in its operating-system context.
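The kind of claim involved can be illustrated generically. VST itself targets C programs, with the proofs carried out in the Coq proof assistant; the function below is a made-up sketch whose only point is that the stated assertions must still hold in the compiled machine-language program, not merely in the source.

    # A made-up source-level assertion of the kind a verified toolchain must
    # preserve all the way down to machine code (illustrative sketch only).
    def withdraw(balance: int, amount: int) -> int:
        assert 0 <= amount <= balance, "precondition on the inputs"
        new_balance = balance - amount
        assert new_balance >= 0, "claim that must survive compilation"
        return new_balance

    print(withdraw(100, 30))  # 70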
We study vision science: the computational principles underlying computer vision, robot vision, and human vision. We are interested in building computer systems that automatically understand visual scenes, both inferring the semantics and extracting 3D structure for a large variety of environments. Our research is also closely related to computer graphics, perception and cognition, cognitive neuroscience, machine learning, HCI, NLP, and AI in general.
At the moment, we focus on leveraging Big 3D Data (e.g. RGB-D sensors, CAD models, depth, multiple viewpoints, panoramic fields of view) for Visual Scene Understanding, looking for representations of visual scenes that realistically describe the world. We believe it is critical to consider the role of the computer as an active explorer in a 3D world, and to learn from rich 3D data that is close to the natural input humans receive.
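As one concrete example of handling such data, the sketch below back-projects a depth image into a 3D point cloud using the standard pinhole camera model; the camera intrinsics and the flat test scene are made-up values for illustration.

    # Back-project an RGB-D depth image to a point cloud (pinhole model).
    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        """depth: HxW array of metric depths; returns an Nx3 point array."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
        x = (u - cx) * depth / fx   # X = (u - cx) * Z / fx
        y = (v - cy) * depth / fy   # Y = (v - cy) * Z / fy
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

    depth = np.full((480, 640), 2.0)  # fake scene: a wall 2 m away
    points = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(points.shape)  # (307200, 3)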
WordNet is a resource used by researchers attempting to get computers to understand English (and any other language for which a WordNet exists). Viewing a human language as a very large graph provides a theoretical framework for creating algorithms that determine the meaning of words in a text (e.g. deciding whether "fly" is an insect or a ball hit into left field) and translate documents between languages. We are working both on making WordNet more effective for these tasks and on creating new approaches that use WordNet.
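As a small illustration of the word-sense task, the sketch below uses NLTK's WordNet interface to list the candidate senses of "fly" and NLTK's Lesk implementation to pick the sense that fits the baseball context (NLTK's WordNet corpus must be installed once via nltk.download("wordnet")).

    # List the noun senses of "fly", then disambiguate it in context.
    from nltk.corpus import wordnet as wn
    from nltk.wsd import lesk

    for synset in wn.synsets("fly", pos=wn.NOUN):
        print(synset.name(), "-", synset.definition())

    # Lesk picks the sense whose dictionary gloss best overlaps the context.
    context = "the batter hit a fly into left field".split()
    print(lesk(context, "fly", pos="n"))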