Computer Science 597B
This is a graduate course focused on research in theoretical aspects of deep learning. In recent years, deep learning has become the central paradigm of machine learning and related fields such as computer vision and natural language processing. But mathematical understanding of many aspects of this endeavor is still lacking. When and how fast does training succeed, and using how many examples? What are the strengths and limitations of various architectures?
The course is geared towards graduate students in computer science and allied fields. Prerequisites: knowledge of machine learning as well as algorithm design/analysis; ideally, students will have taken at least one of COS 521 and COS 511. Auditors are welcome, provided there is space in the classroom. We will prepare detailed notes on the lectures, and the plan is to convert them into a monograph. There will be many guest lecturers from the ongoing IAS special year on Optimization, Statistics and Machine Learning.
Enrolled students as well as auditors are expected to come to class regularly and participate in class discussion. Students who fail to do this will not get credit for the course. This course does not satisfy any undergrad requirements in the COS major (BSE or AB), and undergrads are not allowed to take this course for a grade.
Instructor: Sanjeev Arora, Room 407, 609-258-3869, arora AT the domain name cs.princeton.edu
Topics and main reading:
- Basic framework; intro to optimization and …
- Nonconvex landscapes. Generalized linear models, PCA, etc.
- Thinking of GD as a random walk in the landscape. Escaping saddle points. Langevin dynamics. Batch size vs. step size.
- Towards understanding the generalization puzzle (part 1): infinitely wide deep nets and the associated Neural Tangent Kernel.
- Current ways to understand generalization of finite but overparametrized nets (+ their limitations).
- Possibly no lecture; instead attend the IAS workshop on theory of DL that week (as your schedule permits).
- Implicit regularization in the …
- Understanding the effect of Dropout regularization + ??
- Variational auto-encoders and the reparametrization trick. Generative Adversarial Nets and their limitations.
- Empirically successful tricks (e.g., Batch Norm, Data Augmentation) and efforts to understand them.
- Implicit regularization and acceleration by going deeper: understanding via the dynamics of gradient descent.
- Adversarial examples and approaches towards certified defense. Min-max algorithms.
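To give a flavor of the "Langevin dynamics" topic above, here is a minimal numpy sketch (not from the course materials; the toy objective and the function name `langevin_gd` are illustrative choices): gradient descent plus Gaussian noise can escape a critical point where plain gradient descent would stay stuck forever.

```python
import numpy as np

def grad_f(w):
    # Gradient of the toy objective f(w) = (w^2 - 1)^2, which has a
    # critical point at w = 0 and global minima at w = +1 and w = -1.
    return 4 * w * (w**2 - 1)

def langevin_gd(w0, eta=0.01, beta=10.0, steps=5000, seed=0):
    """Gradient descent plus Gaussian noise (discretized Langevin dynamics)."""
    rng = np.random.default_rng(seed)
    w = w0
    for _ in range(steps):
        noise = rng.normal(0.0, np.sqrt(2 * eta / beta))
        w = w - eta * grad_f(w) + noise
    return w

# Plain GD started exactly at the critical point w = 0 never moves,
# but the noisy iterate drifts away and settles near one of the minima.
print(langevin_gd(0.0))
```

The inverse-temperature parameter `beta` controls the noise scale; in the continuous-time limit the iterates sample from a distribution proportional to exp(-beta * f(w)), which concentrates near the minima.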
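Finally, the Batch Norm operation from the "empirically successful tricks" lecture is simple to state even though its effect on optimization is subtle. A minimal numpy sketch of the forward pass (training-time statistics only; the running averages used at test time are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Per-feature batch normalization for a (batch, features) array."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance per feature
    return gamma * x_hat + beta              # learnable rescale and shift

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 8))
y = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
print(np.round(y.mean(axis=0), 6))  # each feature's batch mean is ~0
```

Note that the output of each unit now depends on the whole batch, not just one example; this coupling is one reason the optimization effects of Batch Norm are hard to analyze.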
Please use this style file to scribe notes; sample files (a source file and a compiled file) are provided.