Hi, I'm Aaron. I recently defended my PhD thesis at Princeton University, advised by Mike Freedman. My research focuses on exploiting regularity in the structure of modern web applications to improve several aspects of those applications. Generally speaking, I am interested in distributed systems, security, and information privacy.
If you are interested in reading my thesis, it is available here. It examines how more intelligent and secure web frameworks can automatically provide certain security guarantees, and how better caching can improve web application performance. The key insight in this work is that today's web applications are typically implemented with an implicit structure, and frameworks can exploit this structure for better security on the one hand (see the Passe project) and better performance on the other (see Hyperbolic Caching).
I am now moving into a role as an engineer at Blockstack, where I am helping to develop the tools needed for a re-decentralized web.
Previously, I attended MIT, where I received a pair of bachelor's degrees. I received my M.Eng in 2011, advised by Barbara Liskov, while working in the Programming Methodology Group on Distributed Information Flow Control and Secure Audit Trail Analysis. My thesis was titled Analyzing Audit Trails in the Aeolus Security Platform.
My current work investigates how web applications' data access patterns can inform cache eviction strategies. In particular, by incorporating varying decay rates for different items in the cache, the eviction strategy is able to perform better than prior strategies on web-like workloads. However, these varying decays introduce reorderings that are not easily implemented using classical queue structures, and so this work requires new approaches to managing item priorities. I spent the summer of 2015 at MSR New York working on this project with Sid Sen.
A paper on this work will appear at USENIX ATC 2017; a preprint is available here.
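The core eviction rule can be sketched in a few lines of Python. This is a toy model of the hyperbolic-caching idea described above (per-item decaying priorities, with eviction by sampling rather than a fixed queue order), not the implementation from the paper; the class and parameter names here are my own.

```python
import random
import time

class HyperbolicCache:
    """Toy sketch: each item's priority is its access count divided by
    its time in the cache, so priorities decay at per-item rates.
    Because no single queue order captures these reorderings, eviction
    samples a few items and removes the lowest-priority one."""

    def __init__(self, capacity, sample_size=5):
        self.capacity = capacity
        self.sample_size = sample_size
        self.store = {}  # key -> (value, access_count, insert_time)

    def _priority(self, key, now):
        _, count, t0 = self.store[key]
        return count / max(now - t0, 1e-9)  # hits per unit time in cache

    def get(self, key):
        if key not in self.store:
            return None
        value, count, t0 = self.store[key]
        self.store[key] = (value, count + 1, t0)  # bump access count
        return value

    def put(self, key, value):
        now = time.monotonic()
        if key not in self.store and len(self.store) >= self.capacity:
            # Sample a handful of resident items; evict the coldest.
            candidates = random.sample(
                list(self.store), min(self.sample_size, len(self.store)))
            victim = min(candidates, key=lambda k: self._priority(k, now))
            del self.store[victim]
        self.store[key] = (value, 1, now)
```

The sampling step is what sidesteps the classical-queue problem: since each item's priority changes continuously at its own rate, the cache never maintains a total order, only compares a few priorities at eviction time.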
Passe explores alternative programming models and mechanisms for securing application data. Many modern applications rely on a centralized, logically separate shared data store. By ensuring that only "normal" queries can execute against this data store, Passe can provide certain confidentiality and integrity guarantees for application data even when the application itself has been compromised by attackers. This model lets application programmers use already-familiar interfaces without writing any explicit security specifications; developers need only run our analysis tool on their applications. I made a poster for this work, which may serve as a handy infographic for those interested.
A paper on this work appeared at the 2014 IEEE Symposium on Security and Privacy.
As a research intern at MSR Cambridge with Miguel Castro and Manuel Costa in the summers of 2013 and 2014, I helped investigate the impact of garbage collection on performance-sensitive code. In systems with large memory capacity, high memory utilization, and small objects, modern GCs can introduce application stalls on the order of tens of seconds, which is unacceptable for systems applications such as key-value stores and databases. However, GCs are also what provide temporal memory safety. During my internships, I helped develop a prototype variant of C# that offers manual memory management while preserving temporal memory safety through compiler-inserted checks.
This work contributed to a paper which will appear at PLDI 2017.
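One common way to get temporal safety without a GC is to tag each allocation with a generation number and have every dereference (the kind of check a compiler could insert) compare the pointer's generation against the allocation's current one, so a use-after-free trips a check instead of silently reading reused memory. The toy Python model below illustrates that general idea only; it is not the specific mechanism from the paper, and all names in it are my own.

```python
class CheckedHeap:
    """Toy model of temporal memory safety via generation tags.
    Pointers are (address, generation) pairs; freeing an address bumps
    its generation, invalidating any outstanding pointers to it."""

    def __init__(self):
        self.objects = {}     # addr -> stored value
        self.generation = {}  # addr -> current generation
        self.free_list = []   # addresses available for reuse
        self.next_addr = 0

    def alloc(self, value):
        # Reuse a freed address if possible, else take a fresh one.
        addr = self.free_list.pop() if self.free_list else self.next_addr
        if addr == self.next_addr:
            self.next_addr += 1
        self.objects[addr] = value
        gen = self.generation.setdefault(addr, 0)
        return (addr, gen)  # a "fat pointer" carrying its generation

    def free(self, ptr):
        self._check(ptr)
        addr, _ = ptr
        del self.objects[addr]
        self.generation[addr] += 1  # invalidate outstanding pointers
        self.free_list.append(addr)

    def load(self, ptr):
        self._check(ptr)  # the kind of check a compiler would insert
        return self.objects[ptr[0]]

    def _check(self, ptr):
        addr, gen = ptr
        if addr not in self.objects or self.generation.get(addr) != gen:
            raise RuntimeError("temporal safety violation: use-after-free")
```

Even when a freed address is reused by a later allocation, a stale pointer still carries the old generation, so the check catches the dangling access deterministically.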