COS 511
Theoretical Machine Learning
Elad Hazan

Spring 2015


Course Summary

Can the mechanism of learning be automated and implemented by a machine?
In this course we formally define and study various models that have been proposed for learning. The course will present and contrast the statistical, computational and game-theoretic models for learning. We will present and rigorously analyze some of the most successful algorithms in machine learning that are extensively used today. Likely topics include: intro to statistical learning theory and bounds on the number of random examples needed to learn; learning in adversarial settings and the on-line learning model; using convex optimization to model and solve learning problems; learning with partial observability; how to boost the accuracy of a weak learning algorithm; universal portfolio selection; learning in games.


Administrative Information

Lectures: Tue 13:30-16:20 in CS building, room 105.

Professor: Elad Hazan, CS building 407. Office hours: Tue 11-12, or by appointment.

Teaching Assistant: Kevin Lai, CS building 003. Office hours: Mon 15:30-16:30, Wed 15:30-16:30.

Requirements: This is a graduate-level course that requires mathematical background. Recommended background courses: probability, discrete math, calculus, analysis, linear algebra, algorithms and data structures, theory of computation / complexity theory, linear programming.

Attendance and the use of electronic devices: Attendance is expected at all lectures. The use of laptops and similar devices for note-taking is permitted.


Textbook and readings

Textbooks:
1. An Introduction to Computational Learning Theory, by M.J. Kearns and U. Vazirani
2. Prediction, Learning, and Games, by N. Cesa-Bianchi and G. Lugosi
3. Understanding Machine Learning: From Theory to Algorithms, by Shai Shalev-Shwartz and Shai Ben-David
4. Boosting: Foundations and Algorithms, by R. E. Schapire and Y. Freund
5. (draft) Introduction to Online Convex Optimization, by E. Hazan, available here
6. Foundations of Machine Learning, by Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar

We'll post additional optional reading as the course progresses.


Grading and collaboration

Grading: homework assignments (65%) and a final project (35%). There will be 6-8 homework assignments, given roughly every week or two, each consisting of a small number of problems (programming optional). All assignments carry equal weight. Details of the final project will be given towards the end of the course.
Turning in assignments and late policy: Coordinate the submission of assignments with the TA. Each late day reduces the attainable credit for the assignment by 10%.
Scribe notes: These are optional, but they are graded and earn extra credit equal to one additional assignment. Every student who is interested will have the option to scribe.
Collaboration: Collaboration on the problem sets is allowed. The final project should be done independently. Clearly acknowledge your collaborators on assignments: before the solution to each question, list everyone you collaborated with on that particular question.


Lecture notes and schedule

Students taking or auditing the course can volunteer to write scribe notes for a lecture using LaTeX. LaTeX tutorials: tutorial 1 and tutorial 2.
The relevant style files appear here and the definitions here. Scribes should strive for good mathematical style.  A few pages from Mathematical Writing by Knuth, Larrabee, and Roberts provide some good hints.
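
For concreteness, a minimal scribe-note skeleton might look as follows. This is only a sketch: the commented \usepackage line stands in for the course style and definitions files linked above (the file name cos511.sty is a placeholder, not the actual file), and the theorem environments may already be provided there.

    \documentclass[11pt]{article}
    \usepackage{amsmath,amssymb,amsthm}
    % \usepackage{cos511}  % placeholder for the style/definitions files linked above

    % Skip these if the course style file already defines them.
    \newtheorem{theorem}{Theorem}
    \newtheorem{definition}{Definition}

    \title{COS 511: Theoretical Machine Learning \\ Lecture N: Topic}
    \author{Lecturer: Elad Hazan \and Scribe: Your Name}
    \date{Spring 2015}

    \begin{document}
    \maketitle

    \section{Overview}
    A short summary of what the lecture covered.

    \begin{definition}
    State definitions exactly as given in class.
    \end{definition}

    \begin{theorem}
    State each result proved in lecture.
    \end{theorem}
    \begin{proof}
    Reproduce the argument, filling in steps that were only sketched on the board.
    \end{proof}

    \end{document}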


Topic | Lecture notes | Extra reading | Problem sets
Introduction to learning theory; the statistical learning model | lecture 1 | chapters 1-2 in books 1-3 | -
PAC learning; start of VC theory | lecture 2 | - | hw1
Finish VC theory; efficient linear classification | lecture 3 part 1; lecture 3 part 2 | - | hw2
Special guest talk (the juggling professor); decision making, multiplicative updates | lecture4-Jake; part 2 = chapter 1 | MW survey | -
Online convex optimization, convex modelling | chapters 2-3 (5); lecture 5 (unedited) | Zinkevich's paper | hw3
Portfolio selection, second-order methods | chapter 4 (5); lecture 6 (unedited) | Cover's paper; a simpler analysis here | hw4
Regularization, perturbation, and stability | chapter 5 (5); lecture 7 (unedited) | Kalai and Vempala's paper; Hannan's original paper | none this week
Online-to-batch conversion; bandits | chapters 6, 9 (5); lecture 8 (unedited) | Flaxman, Kalai, and McMahan; Competing in the Dark | hw5
Special guest lecture: Prof. Arora | lecture 9 (unedited) | Prof. Arora's course | -
Bandits (cont.); boosting | chapters 6, 10 (5); lecture 10 (unedited) | Boosting: Foundations and Algorithms, by Schapire and Freund; Schapire's survey | hw6
Games, duality, and regret | chapter 8 (5); lecture 11 (unedited) | Convergence of regret minimization dynamics in concave games | -
Computational aspects of learning | lecture 12 (unedited) | Blum's hardness result | research paper

Chapter references marked "(5)" refer to chapters of textbook 5 above, the draft Introduction to Online Convex Optimization. A "-" marks an empty entry.