Princeton University
Computer Science Department

Computer Science 511
Theoretical Machine Learning

Spring 2018

 



Schedule and readings

You can choose to do readings from either the "textbook track" or the "eclectic track" (or both).

Numbers in square brackets (e.g. [3]) under "textbook track" refer to chapters or sections of the Mohri et al. textbook.

Some of the readings are available through blackboard (after logging on, click on "e-reserves").  In particular, numbers in braces (e.g. {3}) under "eclectic track" refer to chapters or sections of the Kearns & Vazirani textbook ("An Introduction to Computational Learning Theory"), available through e-reserves.  If you are not registered for the course but want to access these readings, let me know so that we can arrange guest access to blackboard.

Some of these readings can only be accessed when using the Princeton intranet.  (See this link for more details, if connecting remotely.)

Note that this schedule is continually being updated as the semester progresses, so check back often.  (If I seem to be falling behind in keeping this up-to-date, please send me a reminder.)

Each entry below gives the lecture number and date, the topic, the core reading (labeled "Textbook track" and/or "Eclectic track"), and other (optional) readings and links.
1. M 2/5. General introduction to machine learning; consistency model.
Textbook track: [1]

scribe notes

2. W 2/7. Consistency model; review of probability; begin PAC model.
Textbook track: [2.1]
Other: for a review of probability, see (for instance) [Appendix C] from Mohri et al. (containing more material than is actually needed in this course), or Appendices C.2 and C.3 of "Introduction to Algorithms" by Cormen, Leiserson, Rivest and Stein.

scribe notes

3. M 2/12. PAC model; simple special cases; begin Occam's razor.
Textbook track: [2.2]
Eclectic track: {1}, {2.1-2.2}
Other: Valiant's original "Theory of the Learnable" paper, which first got theoretical computer scientists working on machine learning.

scribe notes

4. W 2/14. Prove Occam's razor; begin sample complexity for infinite hypothesis spaces; growth function.
Eclectic track: {3.1}
Other: the original "Occam's Razor" paper.

scribe notes
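
As a concrete illustration (not from the readings), here is a small Python sketch of the sample-size calculation behind Occam's razor for a finite hypothesis class: m >= (1/epsilon)(ln|H| + ln(1/delta)) examples suffice for any consistent learner. The function name and the numbers in the example are illustrative only.

    import math

    def occam_sample_size(hyp_count, epsilon, delta):
        # Enough examples so that any hypothesis consistent with the sample
        # has true error at most epsilon, with probability at least 1 - delta,
        # when the class contains hyp_count hypotheses.
        return math.ceil((math.log(hyp_count) + math.log(1.0 / delta)) / epsilon)

    print(occam_sample_size(2**20, 0.05, 0.01))  # prints 370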

5. M 2/19. Sample complexity for infinite hypothesis spaces.
Textbook track: [3.3]
Eclectic track: {3.5}
Other: Vapnik and Chervonenkis's original paper, "On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities"; and the classic that introduced Vapnik and Chervonenkis's work to the learning theory community, "Learnability and the Vapnik-Chervonenkis dimension".

scribe notes

6. W 2/21. VC-dimension; Sauer's lemma.
Textbook track: [3.4]
Eclectic track: {3.2-3.4}, {3.6}

scribe notes
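
As a concrete illustration (not from the readings), here is a tiny Python computation of the bound in Sauer's lemma, Phi_d(m) = sum_{i=0}^{d} C(m,i), which caps the growth function of a class of VC-dimension d. The function name is illustrative only.

    import math

    def sauer_bound(m, d):
        # Sauer's lemma: a class of VC-dimension d realizes at most
        # sum_{i=0}^{d} C(m, i) labelings of m points -- O(m^d), not 2^m.
        return sum(math.comb(m, i) for i in range(d + 1))

    print(sauer_bound(20, 3), "vs", 2**20)  # 1351 vs 1048576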

7. M 2/26. Lower bounds on sample complexity; handling inconsistent hypotheses; begin Chernoff bounds.
Textbook track: [2.3]

scribe notes

8. W 2/28. Finish Chernoff bounds; McDiarmid's inequality; error bounds for inconsistent hypotheses; overfitting.
Textbook track: [D.1-D.2]
Eclectic track: "Probability Inequalities for Sums of Bounded Random Variables" by Wassily Hoeffding (first four sections).

scribe notes
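
As a concrete illustration (not from the readings), here is a quick numerical sanity check of Hoeffding's inequality, P(|mean - p| >= eps) <= 2 exp(-2 m eps^2) for i.i.d. [0,1]-valued variables. The simulation setup (Bernoulli(0.5) samples, the chosen m and eps) is illustrative only.

    import math, random

    def hoeffding_bound(m, eps):
        # Two-sided Hoeffding bound for the mean of m i.i.d. variables in [0, 1].
        return 2.0 * math.exp(-2.0 * m * eps * eps)

    # Compare the bound against a simulation with Bernoulli(0.5) samples.
    m, eps, trials = 200, 0.1, 10000
    hits = sum(abs(sum(random.random() < 0.5 for _ in range(m)) / m - 0.5) >= eps
               for _ in range(trials))
    print(hits / trials, "<=", hoeffding_bound(m, eps))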

9. M 3/5. Rademacher complexity.
Textbook track: [3.1-3.2]
Eclectic track: Section 3 of "Theory of Classification: A Survey of Some Recent Advances" by Boucheron, Bousquet, and Lugosi.

scribe notes
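
As a concrete illustration (not from the readings), here is a small Monte Carlo sketch estimating the empirical Rademacher complexity of a finite hypothesis set on a fixed sample, E_sigma[sup_h (1/m) sum_i sigma_i h(x_i)]. The matrix interface and function name are assumptions made for illustration.

    import numpy as np

    def empirical_rademacher(H, trials=2000, seed=0):
        # H[j, i] holds hypothesis j's value on sample point i,
        # for a finite set of k hypotheses on a fixed sample of size m.
        rng = np.random.default_rng(seed)
        k, m = H.shape
        total = 0.0
        for _ in range(trials):
            sigma = rng.choice([-1.0, 1.0], size=m)   # random Rademacher signs
            total += np.max(H @ sigma) / m            # sup over the k hypotheses
        return total / trials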

10. W 3/7. Finish Rademacher complexity; begin boosting.
Textbook track: [6.1]
Eclectic track: Chapter 1 of Boosting: Foundations and Algorithms by Schapire & Freund.

scribe notes

11. M 3/12. Boosting.
Textbook track: [6.2] (okay to skip or skim 6.2.2 and 6.2.3)
Eclectic track: Sections 3.0-3.1 and 4.0-4.1 of Boosting: Foundations and Algorithms.

slides (toy example)
slides (learning curves)

scribe notes
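
As a concrete illustration (not from the readings), here is a minimal Python sketch of AdaBoost. It assumes a weak_learner(X, y, D) callback that returns a {-1,+1}-valued classifier trained under distribution D; all names here are illustrative.

    import numpy as np

    def adaboost(X, y, weak_learner, rounds):
        # y has entries in {-1, +1}; D is the distribution over examples.
        m = len(y)
        D = np.full(m, 1.0 / m)
        hyps, alphas = [], []
        for _ in range(rounds):
            h = weak_learner(X, y, D)
            preds = h(X)
            eps = D[preds != y].sum()             # weighted training error
            if eps <= 0.0 or eps >= 0.5:          # perfect, or weak learner failed
                break
            alpha = 0.5 * np.log((1.0 - eps) / eps)
            D *= np.exp(-alpha * y * preds)       # down-weight correct examples
            D /= D.sum()                          # renormalize to a distribution
            hyps.append(h)
            alphas.append(alpha)
        # Final classifier: weighted majority vote of the weak hypotheses.
        return lambda Z: np.sign(sum(a * h(Z) for a, h in zip(alphas, hyps)))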

12. W 3/14. Finish boosting.
Textbook track: [6.3.1-6.3.3]
Eclectic track: Sections 5.0-5.4.1 (except 5.2.2-5.2.3) of Boosting: Foundations and Algorithms.
Other: a high-level overview of some of the various approaches that have attempted to explain AdaBoost as a learning method: "Explaining AdaBoost" (by Schapire, from Empirical Inference: Festschrift in Honor of Vladimir N. Vapnik).

margins "movie"

scribe notes

13. M 3/26. Support-vector machines.
Textbook track: [4]; [5.1-5.3] (okay to skip or skim the many parts of these chapters that we are not covering)
Eclectic track: Sections 5.4-5.8 of The Nature of Statistical Learning Theory by Vapnik.
Other: a tutorial on SVMs.

scribe notes
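
As a concrete illustration (not from the readings), here is a small Python sketch that trains a soft-margin SVM by stochastic subgradient descent on the regularized hinge loss. This Pegasos-style primal solver is not the dual/kernel formulation developed in lecture; the function name and parameters are illustrative.

    import numpy as np

    def hinge_sgd(X, y, lam=0.1, epochs=20, seed=0):
        # Stochastic subgradient descent on the SVM objective
        #   (lam/2)||w||^2 + (1/m) sum_i max(0, 1 - y_i <w, x_i>),
        # with X of shape (m, n) and y in {-1, +1}.
        rng = np.random.default_rng(seed)
        m, n = X.shape
        w = np.zeros(n)
        t = 0
        for _ in range(epochs):
            for i in rng.permutation(m):
                t += 1
                eta = 1.0 / (lam * t)               # standard decaying step size
                w *= (1.0 - eta * lam)              # shrink: regularizer step
                if y[i] * np.dot(w, X[i]) < 1:      # margin violated: hinge step
                    w += eta * y[i] * X[i]
        return w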

14. W 3/28. Finish support-vector machines; online learning; learning with expert advice.
Textbook track: [7.1-7.2.1]
Eclectic track: Sections 1 and 2 of "The Weighted Majority Algorithm" by Littlestone and Warmuth.
Other: a comparison of boosting and SVMs is given in Section 5.6 of Boosting: Foundations and Algorithms.

scribe notes

15. M 4/2. Weighted majority algorithm.
Textbook track: [7.2.2-7.2.3]
Eclectic track: Sections 5 and 6 of "The Weighted Majority Algorithm" by Littlestone and Warmuth.
Other: a "mind-reading" game based on online learning.

scribe notes
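
As a concrete illustration (not from the readings), here is a minimal Python sketch of the deterministic weighted majority algorithm for binary prediction with expert advice. The interface (prediction lists, outcome list) is made up for illustration.

    import numpy as np

    def weighted_majority(expert_preds, outcomes, beta=0.5):
        # expert_preds[t][i] in {0, 1}: expert i's prediction at time t;
        # outcomes[t] in {0, 1}. Experts that err are penalized by factor beta.
        n = len(expert_preds[0])
        w = np.ones(n)
        mistakes = 0
        for preds, outcome in zip(expert_preds, outcomes):
            preds = np.asarray(preds)
            # Predict with the weighted majority vote of the experts.
            vote = 1 if w[preds == 1].sum() >= w[preds == 0].sum() else 0
            mistakes += int(vote != outcome)
            w[preds != outcome] *= beta           # multiplicative penalty
        return mistakes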

16. W 4/4. Perceptron algorithm; Winnow.
Textbook track: [7.3]
Eclectic track: (see textbook track)

scribe notes
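
As a concrete illustration (not from the readings), here is a minimal Python sketch of the classic perceptron algorithm for {-1,+1} labels; the function name and loop structure are illustrative, and the mistake-bound analysis from lecture applies in the linearly separable case.

    import numpy as np

    def perceptron(X, y, passes=10):
        # X is (m, n); y has entries in {-1, +1}.
        w = np.zeros(X.shape[1])
        for _ in range(passes):
            for x, label in zip(X, y):
                if label * np.dot(w, x) <= 0:   # mistake: additive update
                    w += label * x
        return w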

17. M 4/9. Regression; linear regression; begin Widrow-Hoff algorithm.
Textbook track: [10.1; 10.3.1; 10.3.6]
Eclectic track: Sections 1-5 of "Exponentiated Gradient versus Gradient Descent for Linear Predictors" by Kivinen and Warmuth.

scribe notes

18. W 4/11. Finish Widrow-Hoff; EG; mirror descent; converting on-line to batch.
Textbook track: [7.4]
Eclectic track: (see textbook track)

scribe notes
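
As a concrete illustration (not from the readings), here are side-by-side single-step sketches of the Widrow-Hoff (additive, gradient-descent) and exponentiated gradient (multiplicative) updates for on-line linear regression with squared loss, in the style of Kivinen and Warmuth; function names are illustrative.

    import numpy as np

    def widrow_hoff_step(w, x, y, eta):
        # Additive update: online gradient descent on the loss (w.x - y)^2 / 2.
        return w - eta * (np.dot(w, x) - y) * x

    def eg_step(w, x, y, eta):
        # Multiplicative update on the same loss; w stays a probability
        # vector (normalized to the simplex after each step).
        v = w * np.exp(-eta * (np.dot(w, x) - y) * x)
        return v / v.sum()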

19. M 4/16. Modeling probability distributions; maximum likelihood; maximum entropy modeling of distributions.
Other: The duality theorem given in class is taken from: Stephen Della Pietra, Vincent Della Pietra and John Lafferty. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380-393, April 1997. pdf or pdf

scribe notes

More readings, software, etc. on maximum entropy are available here (although a bit out of date).

Maxent (software and readings) for modeling species distributions is available here.

20. W 4/18. Finish maxent; begin on-line log loss.

scribe notes

21. M 4/23. On-line log loss and universal compression; Bayes algorithm.

scribe notes

22. W 4/25. Shifting experts.
Other: Mark Herbster and Manfred K. Warmuth. Tracking the Best Expert. Machine Learning, 32(2):151-178, August 1998. pdf

scribe notes

23. M 4/30. Portfolio selection.
Other: Avrim Blum and Adam Kalai. Universal Portfolios With and Without Transaction Costs. Machine Learning, 35:193-205, 1999. pdf

scribe notes

For a view that says this is not the way to do things, see this short piece (and try to guess what is weird in how he wrote it): Paul A. Samuelson. Why we should not make mean log of wealth big though years to act are long. Journal of Banking and Finance, 3:305-307, 1979. pdf

24. W 5/2. Game theory and learning.
Other: Yoav Freund and Robert E. Schapire. Game theory, on-line prediction and boosting. In Proceedings of the Ninth Annual Conference on Computational Learning Theory, pages 325-332, 1996. pdf

scribe notes

For more details, see: Yoav Freund and Robert E. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29:79-103, 1999. pdf

OR: Chapter 6 of Boosting: Foundations and Algorithms.
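
As a concrete illustration (not from the readings), here is a minimal multiplicative-weights sketch for approximating the value of a zero-sum game, in the spirit of the Freund-Schapire game-theoretic view; the matrix, parameters, and function name are illustrative.

    import numpy as np

    def mw_game(M, rounds=2000, eta=0.1):
        # M is the loss matrix of a zero-sum game (row player minimizes).
        # Multiplicative weights over rows against a best-responding column
        # player; the average loss approaches the game value from above.
        n = M.shape[0]
        w = np.ones(n)
        avg_loss = 0.0
        for _ in range(rounds):
            p = w / w.sum()                 # row player's mixed strategy
            col = np.argmax(p @ M)          # column player best-responds
            avg_loss += p @ M[:, col]
            w *= np.exp(-eta * M[:, col])   # penalize high-loss rows
        return avg_loss / rounds

    # Matching pennies (0/1 losses) has value 0.5; this prints roughly 0.5.
    print(mw_game(np.array([[0.0, 1.0], [1.0, 0.0]])))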