Princeton University

Computer Science 511


You can choose to do readings from either the "textbook track" or the "alternative track" (or both).
Numbers in square brackets (e.g. [3]) under "textbook track" refer to chapters or sections of the Mohri et al. textbook.
Some of the readings are available through Blackboard (after logging on, click on "ereserves"). In particular, numbers in braces (e.g. {3}) under "alternative track" refer to chapters or sections of the Kearns & Vazirani textbook ("An Introduction to Computational Learning Theory"), available through ereserves. If you are not registered for the course but want to access these readings, let me know so that we can arrange guest access to Blackboard.
Some of these readings can only be accessed when using the Princeton intranet. (See this link for more details, if connecting remotely.)
Note that this schedule is continually being updated as the semester progresses, so check back often. (If I seem to be falling behind in keeping this up to date, please send me a reminder.)
#  Date  Topic  Core reading (textbook track / alternative track)  Other (optional) readings and links
1  Tu 2/4  General introduction to machine learning; consistency model  [1]  
2  Th 2/6  Consistency model; review of probability; begin PAC model  [2.1]  for a review of probability, see (for instance): [Appendix C] from Mohri et al. (containing more material than is actually needed in this course) OR: Appendix C.2 and C.3 in "Introduction to Algorithms" by Cormen, Leiserson, Rivest and Stein (on ereserves). 

3  Tu 2/11  PAC model; simple special cases; begin Occam's razor  [2.2]  {1}, {2.1–2.2}  Valiant's original "Theory of the Learnable" paper, which first got theoretical computer scientists working on machine learning 
4  Th 2/13  Prove Occam's razor; begin sample complexity for infinite hypothesis spaces; growth function  {3.1}  original "Occam's Razor" paper  
5  Tu 2/18  Sample complexity for infinite hypothesis spaces; VC-dimension  [3.3]  {3.5}  Vapnik and Chervonenkis's original paper "On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities" 
6  Th 2/20  Sauer's lemma; upper bound on sample complexity; lower bounds on sample complexity  [3.4]  {3.2–3.4}, {3.6}  
7  Tu 2/25  Finish lower bound; handling inconsistent hypotheses; begin Chernoff bounds  [2.3]  
8  Th 2/27  Finish Chernoff bounds; McDiarmid's inequality; error bounds for inconsistent hypotheses; overfitting  [D.1–D.2]  "Probability Inequalities for Sums of Bounded Random Variables" by Wassily Hoeffding (first four sections)  
9  Tu 3/4  Rademacher complexity  [3.1–3.2]  Section 3 of "Theory of Classification: A Survey of Some Recent Advances" by Boucheron, Bousquet, Lugosi  
10  Th 3/6  Finish Rademacher complexity; begin boosting  [6.1–6.2] (okay to skip or skim 6.2.2 and 6.2.3)  Chapter 1 and Sections 3.0–3.1 of Boosting: Foundations and Algorithms by Schapire & Freund  
11  Tu 3/11  Boosting  [6.3.1–6.3.3]  Sections 4.0–4.1 and 5.0–5.4.1 (except 5.2.2–5.2.3) of Boosting: Foundations and Algorithms.  
12  Th 3/13  Finish boosting  
13  Tu 3/25  Support-vector machines  [4]; [5.1–5.3] (okay to skip or skim the many parts of these chapters that we are not covering)  Sections 5.4–5.8 of The Nature of Statistical Learning Theory by Vapnik.  tutorial on SVM's 
14  Th 3/27  Finish support-vector machines; online learning; learning with expert advice  [7.1–7.2.1]  Sections 1 and 2 of "The Weighted Majority Algorithm" by Littlestone and Warmuth  "mind-reading" game based on online learning 
15  Tu 4/1  Weighted majority algorithm  [7.2.2–7.2.3]  Sections 5 and 6 of "The Weighted Majority Algorithm" by Littlestone and Warmuth  
16  Th 4/3  Perceptron algorithm; winnow  [7.3]  (see textbook track)  
17  Tu 4/8  Regression; linear regression; begin Widrow-Hoff algorithm  [10.1; 10.3.1–10.3.6]  Sections 1–5 of "Exponentiated Gradient versus Gradient Descent for Linear Predictors" by Kivinen and Warmuth  
18  Th 4/10  Finish Widrow-Hoff; EG; mirror descent; converting online to batch  [7.4]  (see textbook track)  
19  Tu 4/15  Modeling probability distributions; maximum likelihood; maximum entropy modeling of distributions  The duality theorem given in class is taken from: Stephen Della Pietra, Vincent Della Pietra and John Lafferty.  More readings, software, etc. on maximum entropy are available here (although a bit out of date).  
20  Th 4/17  finish maxent; begin online log loss  
21  Tu 4/22  online log loss and universal compression; Bayes algorithm  
22  Th 4/24  Shifting experts  Mark Herbster and Manfred K. Warmuth. Tracking the Best Expert. Machine Learning, 32(2):151–178, August 1998. 

23  Tu 4/29  Portfolio selection  Avrim Blum and Adam Kalai. Universal Portfolios With and Without Transaction Costs. Machine Learning, 35:193–205, 1999.  For a view that says this is not the way to do things, see this short piece (and try to guess what is weird in how he wrote it): Paul A. Samuelson 

24  Th 5/1  Game theory and learning  Yoav Freund and Robert E. Schapire. Game theory, on-line prediction and boosting. In Proceedings of the Ninth Annual Conference on Computational Learning Theory, pages 325–332, 1996. 
For more details, see: Yoav Freund and Robert E. Schapire. OR: Chapter 6 of Boosting: Foundations and Algorithms by Schapire & Freund 