I am a research scientist in the Creative Technologies Laboratory at Adobe. Before that, I was a postdoc working with Prof. Andrew Gelman in the Statistics Department at Columbia University. I did my Ph.D. in Computer Science at Princeton University, working in the Sound Lab with Prof. Perry Cook and Prof. David Blei. My research interests include developing efficient Bayesian (and pseudo-Bayesian) inference algorithms; hierarchical probabilistic modeling of audio, text, and marketing data; audio feature extraction and music information retrieval; and the application of these techniques to musical synthesis.
You can contact me by email at mdhoffma at cs [dot] princeton [dot] edu.
I have published papers in a number of peer-reviewed conference proceedings. A couple of them have even won awards! Most of my papers can be downloaded from my publications page.
In Spring 2009, I taught Advanced Digital Signal Theory at NYU. I have also been a teaching assistant for a few classes in Princeton's CS department.
Below are links to some code I've written that you may or may not find useful. If you have trouble getting it to work, feel free to email me and I'll try to get back to you with a helpful response in a timely fashion. (But no promises.)
Here is a Matlab implementation of the No-U-Turn Sampler (NUTS), along with an implementation of Hamiltonian Monte Carlo (HMC) that uses a dual averaging algorithm to tune the step size parameter. These algorithms are also implemented in the automatic Bayesian inference engine Stan. The algorithms are described in my paper "The No-U-Turn Sampler: Automatically adapting path lengths in Hamiltonian Monte Carlo". UPDATED January 2012: there was a serious bug in the old version of nuts.m, which is now fixed.
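If you just want the flavor of what HMC is doing before digging into the Matlab code or the paper, here is a minimal Python sketch of a single HMC step with a hand-picked, fixed step size. The dual-averaging adaptation and the no-U-turn criterion are deliberately omitted, and the target distribution and tuning constants below are toy placeholders, not anything from my implementation.

```python
import numpy as np

def leapfrog(theta, r, grad, eps, grad_logp):
    # half step for momentum, full step for position, half step for momentum
    r = r + 0.5 * eps * grad
    theta = theta + eps * r
    grad = grad_logp(theta)
    r = r + 0.5 * eps * grad
    return theta, r, grad

def hmc_step(theta, logp, grad_logp, eps, L, rng):
    r0 = rng.standard_normal(theta.shape)   # resample momentum
    theta_new, r, grad = theta.copy(), r0.copy(), grad_logp(theta)
    for _ in range(L):                       # simulate the trajectory
        theta_new, r, grad = leapfrog(theta_new, r, grad, eps, grad_logp)
    # Metropolis accept/reject on the joint (position, momentum) energy
    log_accept = (logp(theta_new) - 0.5 * r @ r) - (logp(theta) - 0.5 * r0 @ r0)
    return theta_new if np.log(rng.uniform()) < log_accept else theta

# toy target: a standard 2-d normal
logp = lambda th: -0.5 * th @ th
grad_logp = lambda th: -th

rng = np.random.default_rng(0)
theta = np.zeros(2)
samples = []
for t in range(2000):
    theta = hmc_step(theta, logp, grad_logp, eps=0.3, L=10, rng=rng)
    samples.append(theta.copy())
samples = np.array(samples)
print(np.abs(samples[500:].mean(axis=0)))  # sample mean should be near 0
```

The pain point NUTS addresses is exactly the `eps` and `L` arguments above: HMC's performance is very sensitive to them, and NUTS chooses the trajectory length automatically (with dual averaging handling the step size).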
Here is some Python code implementing online inference for the Latent Dirichlet Allocation probabilistic topic model. You may also want to check out the super-efficient (but less readable) implementation in Vowpal Wabbit. The algorithm is described in my NIPS 2010 paper.
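The heart of the online algorithm is a Robbins-Monro stochastic update: repeatedly blend the current variational parameter with a noisy estimate computed from a single minibatch, with a step size that decays like (tau0 + t)^(-kappa). The sketch below applies that schedule to a deliberately simple toy problem (tracking a mean from a stream), not to LDA itself; the `tau0` and `kappa` values are illustrative, though kappa must lie in (0.5, 1] for the usual convergence guarantees.

```python
import numpy as np

def rho(t, tau0=1.0, kappa=0.7):
    # Robbins-Monro step size: sum(rho_t) diverges while sum(rho_t^2)
    # converges, which requires 0.5 < kappa <= 1
    return (tau0 + t) ** (-kappa)

rng = np.random.default_rng(1)
data = rng.normal(3.0, 1.0, size=100_000)  # stand-in for a document stream

lam = 0.0  # stand-in for the variational parameter
for t, batch in enumerate(data.reshape(-1, 100)):  # minibatches of 100
    lam_hat = batch.mean()  # noisy estimate from this minibatch alone
    lam = (1 - rho(t)) * lam + rho(t) * lam_hat
print(lam)  # converges toward the true value 3.0
```

In online LDA the same blend is applied to the topic parameters lambda, with `lam_hat` replaced by the natural-gradient estimate computed from the minibatch's variational E-step.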
Here is some MATLAB code implementing variational inference for my Gamma Process Nonnegative Matrix Factorization (GaP-NMF) model (which you can read about in my ICML paper here).
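For context, GaP-NMF builds on plain nonnegative matrix factorization, which approximates a nonnegative matrix X (e.g. a spectrogram) as W @ H with W, H nonnegative. The sketch below is classic maximum-likelihood NMF with the Lee-Seung multiplicative updates for the KL objective, on synthetic data; it is not the gamma-process model or the variational inference from the paper, which additionally infers how many components to use.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy "spectrogram": 20 frequency bins x 50 frames from 3 latent components
W_true = rng.gamma(2.0, 1.0, size=(20, 3))
H_true = rng.gamma(2.0, 1.0, size=(3, 50))
X = W_true @ H_true

K = 3  # number of components (GaP-NMF infers this; here it is fixed)
W = rng.gamma(1.0, 1.0, size=(20, K))
H = rng.gamma(1.0, 1.0, size=(K, 50))
eps = 1e-10  # guards against division by zero
for _ in range(1000):
    # Lee-Seung multiplicative updates for the KL divergence objective;
    # they keep W and H nonnegative by construction
    H *= (W.T @ (X / (W @ H + eps))) / (W.sum(axis=0, keepdims=True).T + eps)
    W *= ((X / (W @ H + eps)) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)

err = np.abs(X - W @ H).mean() / X.mean()
print(err)  # small relative reconstruction error
```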
I've put up some MATLAB code implementing my Codeword Bernoulli Average (CBA) model for automatically tagging songs from audio (see my ISMIR paper). You can download it here. cbascript.m demonstrates the process of vector quantization, parameter inference, and generalization to new songs. UPDATED July 2010: should be exactly equivalent to the old version, but uses much less memory and is about 100 times faster.
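The vector-quantization step that feeds CBA is worth illustrating: a k-means codebook is learned from per-frame audio feature vectors, each frame is mapped to its nearest codeword, and a song is then summarized by its codeword histogram. The Python sketch below uses synthetic stand-in features and a naive initialization; it shows only the VQ front end, not CBA's parameter inference.

```python
import numpy as np

def vq_codebook(features, K, iters=20):
    """Toy k-means vector quantizer: learn K codewords and assign each
    feature vector to its nearest one. Initialization just spreads the
    starting codewords evenly through the data."""
    codebook = features[:: max(1, len(features) // K)][:K].copy()
    for _ in range(iters):
        # assign each vector to the nearest codeword
        d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        # move each codeword to the mean of its assigned vectors
        for k in range(K):
            if np.any(assign == k):
                codebook[k] = features[assign == k].mean(axis=0)
    return codebook, assign

# stand-in for per-frame audio features (e.g. MFCC-like vectors)
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(c, 0.1, size=(200, 4)) for c in (-1.0, 0.0, 1.0)])
codebook, assign = vq_codebook(feats, K=3)

# a song is then summarized by its codeword histogram
hist = np.bincount(assign, minlength=3) / len(assign)
print(hist)  # roughly [1/3, 1/3, 1/3] for this balanced toy set
```

CBA then models the probability that a tag applies to a song as an average of per-codeword Bernoulli parameters over the song's codewords; that inference step is what the MATLAB code implements.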
A while ago, I released FeatSynth, a C++ framework for feature-based synthesis. Unfortunately, I got distracted by other research and never implemented enough synthesis and feature-extraction modules to make it as useful as it could be, but feel free to check it out if you like.
You can download my CV (as of November 2011) by clicking on this link: CV.