Interests: Theoretical foundations of machine learning, design and analysis of efficient algorithms for machine learning and mathematical optimization.
Active Research Projects:
- Learning with partial feedback
- Online Convex Optimization
- Projection-free learning
- Sublinear Optimization
Elad Hazan is a professor of computer science at Princeton University. He joined Princeton in 2015 from the Technion, where he had been an associate professor of operations research. His research focuses on the design and analysis of algorithms for fundamental problems in machine learning and optimization. His contributions include the co-development of the AdaGrad algorithm for training learning machines and the first sublinear-time algorithms for convex optimization. He is a two-time recipient of the IBM Goldberg Best Paper Award, in 2012 for contributions to sublinear-time algorithms for machine learning and in 2008 for decision making under uncertainty, and has also received a European Research Council grant, a Marie Curie fellowship, and two Google Research Awards. He serves on the steering committee of the Association for Computational Learning and was program chair for COLT 2015.
- Sublinear Optimization for Machine Learning. (with K. Clarkson and D. Woodruff)
Journal of the ACM (JACM), Volume 59 Issue 5, October 2012.
- Playing Non-linear Games with Linear Oracles. (with D. Garber)
54th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2013)
- Interior-Point Methods for Full-Information and Bandit Online Learning. (with J. Abernethy and A. Rakhlin)
IEEE Transactions on Information Theory 58(7): 4164-4175 (2012)
- Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. (with J. Duchi and Y. Singer)
Journal of Machine Learning Research (JMLR), Volume 12, 2011, Pages 2121-2159.
- Logarithmic Regret Algorithms for Online Convex Optimization. (with A. Agarwal and S. Kale)
Machine Learning Journal, Volume 69, Issue 2-3, Pages 169-192, December 2007.