Computer Science 598C
Professor: David Blei - 204 CS Building - 258-9907
Graduate Coordinator: Melissa Lawson - 310 CS Building - 258-5387
|Week 1|| Introduction to graphical models |
|Week 2|| Graphical models, posterior inference, hierarchical Bayesian models |
Reading: Jordan, M. "Introduction to graphical models" Ch 2, 3
|Week 3|| Markov chain Monte Carlo sampling (MCMC) |
[Neal, 1993] Neal, R. "Probabilistic inference using Markov Chain Monte Carlo methods" (read pp 1-69.)
|Week 4|| Markov chain Monte Carlo sampling (MCMC) |
[Andrieu et al., 2003] Andrieu, C., de Freitas, N., Doucet, A., and Jordan, M. (2003). An introduction to MCMC for machine learning. Machine Learning, 50:5-43. (read pp 1-23.)
[Thomas et al., 2000] Thomas, A., Gutin, A., Abkevich, V., and Bansal, A. (2000). Multilocus linkage analysis by blocked Gibbs sampling. Statistics and Computing, 10:259-269.
[Mau et al., 1999] Mau, B., Newton, M., and Larget, B. (1999). Bayesian phylogenies via Markov Chain Monte Carlo methods. Biometrics, 55:1-12.
[Mau and Newton, 1997] Mau, B. and Newton, M. (1997). Phylogenetic inference for binary data on dendrograms using Markov Chain Monte Carlo. Journal of Computational and Graphical Statistics, 6(1):122-131. (Optional.)
Jordan, M. "Introduction to graphical models" Ch 23. (Optional.)
|Week 5|| The Kalman filter |
[Handout] Jordan, M. "Introduction to graphical models" Ch 11.
[Handout] Jordan, M. "Introduction to graphical models" Ch 15.
|Week 6|| Latent Dirichlet allocation |
[Blei et al., 2003] Blei, D., Ng, A., and Jordan, M. Latent Dirichlet Allocation. Journal of Machine Learning Research. 3:993-1022.
|Week 7|| Variational methods |
[Blei, 2004] Blei, D. "Graphical models and approximate posterior inference" (2004).
[Winn and Bishop, 2005] Winn, J. and Bishop, C. (2005). Variational message passing. Journal of Machine Learning Research. 6:661-694.
|Week 8|| Advanced topics in variational methods |
[Wainwright and Jordan, 2003a] Wainwright, M. and Jordan, M. "Variational inference in graphical models: The view from the marginal polytope" (2003).
[Wainwright and Jordan, 2003b] Wainwright, M. and Jordan, M. Graphical models, exponential families, and variational inference. (2003). (Read sections 1-6.)
|Week 9|| Belief propagation |
[Yedidia et al., 2004] Yedidia, J., Freeman, W. and Weiss, Y. "Constructing Free Energy Approximations and Generalized Belief Propagation Algorithms" (2004).
[Dean, 2005] Dean, T. "Hierarchical Expectation Refinement for Learning Generative Perception Models" (2005).
|Week 10|| Advanced topics in variational methods (continued) |
We continue to discuss Wainwright and Jordan (week 8) and Yedidia et al. (week 9).
|Week 11|| Applications in language and graphics/vision |
[Lawrence et al., 2006] Lawrence, J., Ben-Artzi, A., DeCoro, C., Matusik, W., Pfister, H., Ramamoorthi, R., and Rusinkiewicz, S. "Inverse shade trees for non-parametric material representation and editing" (2006).
[Fine et al., 1998] Fine, S., Singer, Y., and Tishby, N. "The hierarchical hidden Markov model: Analysis and applications" (1998).
|Week 12|| Summary and class presentations |
Probabilistic models are an indispensable tool in machine learning, with applications in information retrieval, computer vision, natural language processing, and bioinformatics. Under a probabilistic model, the data are treated as a collection of random variables with a particular pattern of dependencies among them. Many methods from statistics and machine learning can be cast as data analysis under probabilistic modeling assumptions.
Graphical models provide a succinct formalism for describing probabilistic models and help unify our understanding of many algorithms. The formalism also facilitates developing new algorithms and generalizing classical algorithms to new applications.
The central computational problem of graphical models is probabilistic inference, computing the distribution of a set of random variables conditional on the values of another set of variables. For some models, such as hidden Markov models (HMMs), mixture models, and Kalman filters, the inference problem can be solved in polynomial time. For many models of interest, however, inference is intractable. Such models include factorial HMMs (from genetics), topic models (from document analysis), deep belief networks (from neuroscience), and hierarchical Bayesian models (from Bayesian statistics).
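The polynomial-time case can be made concrete with a small sketch. For a hidden Markov model, the forward algorithm computes the likelihood of an observation sequence in O(TK^2) time instead of summing over all K^T state paths. All parameters and observations below are made-up illustrative values, not taken from any of the readings.

```python
import numpy as np

# A toy 2-state, 2-symbol HMM with illustrative (made-up) parameters.
pi = np.array([0.6, 0.4])            # initial state distribution
A = np.array([[0.7, 0.3],            # A[i, j] = P(z_t = j | z_{t-1} = i)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],            # B[i, k] = P(x_t = k | z_t = i)
              [0.2, 0.8]])

def forward(obs):
    """Forward algorithm: returns P(x_1, ..., x_T) in O(T * K^2) time."""
    alpha = pi * B[:, obs[0]]                 # alpha_1(i) = pi_i * P(x_1 | z_1 = i)
    for x in obs[1:]:
        alpha = (alpha @ A) * B[:, x]         # recursive update over states
    return alpha.sum()                        # marginalize out the final state

obs = [0, 1, 1, 0]
likelihood = forward(obs)
print(likelihood)
```

The same recursion, run in both directions, yields the conditional (posterior) distribution over hidden states; the point here is only that the summation over paths factors along the chain.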
In these settings, practitioners must avail themselves of approximate inference. By accepting an approximate solution, we can express complicated probabilistic assumptions about our data without worrying about the computational barriers that exact inference raises. In this seminar we will study the state of the art in approximate inference. We will focus on simulation methods such as Markov chain Monte Carlo sampling (MCMC), optimization-based methods such as mean-field variational inference, and message-passing methods such as belief propagation. We will read research papers and book excerpts, with the goal of understanding the capabilities and limitations of these algorithms.
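As a small taste of the simulation methods on the reading list, here is a minimal random-walk Metropolis-Hastings sketch that draws from the posterior of a Gaussian mean. The model, the data, and all tuning constants are invented for illustration, not part of the course material.

```python
import random, math

# Illustrative model: prior mu ~ N(0, 10^2), likelihood x_i ~ N(mu, 1).
data = [2.1, 1.7, 2.4, 2.0, 1.9]     # made-up observations

def log_post(mu):
    """Unnormalized log posterior: log prior + log likelihood (constants dropped)."""
    lp = -0.5 * (mu / 10.0) ** 2
    lp += sum(-0.5 * (x - mu) ** 2 for x in data)
    return lp

random.seed(0)
mu, samples = 0.0, []
for step in range(5000):
    prop = mu + random.gauss(0.0, 0.5)        # symmetric random-walk proposal
    # Accept with probability min(1, p(prop) / p(mu)); otherwise stay put.
    if math.log(random.random()) < log_post(prop) - log_post(mu):
        mu = prop
    samples.append(mu)

post_mean = sum(samples[1000:]) / len(samples[1000:])   # discard burn-in
print(post_mean)
```

Because the target is known only up to a normalizing constant, the acceptance ratio needs just the difference of unnormalized log posteriors; this is what makes MCMC applicable when exact inference is intractable.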
There are three course requirements: