Online convex optimization

In recent years, convex optimization and the notion of regret minimization in games have been combined and applied to machine learning in a general framework called online convex optimization. For more information, see the survey on the convex optimization approach to regret minimization, or this draft of a course book on online convex optimization in machine learning.
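As a minimal illustration of the framework, the sketch below shows online gradient descent, the prototypical online convex optimization algorithm: at each round the learner plays a point, observes a convex loss, and takes a gradient step followed by a projection back onto the feasible set. The function names and the Euclidean-ball feasible set are illustrative choices, not a specific implementation from the literature.

```python
import numpy as np

def online_gradient_descent(grad_oracles, x0, radius, eta0=1.0):
    """Online gradient descent over a Euclidean ball of the given radius.

    grad_oracles: one gradient oracle per round (revealed online).
    With step size eta_t = eta0 / sqrt(t), this guarantees O(sqrt(T))
    regret against any fixed point in the ball, for convex losses.
    """
    x = np.array(x0, dtype=float)
    iterates = []
    for t, grad in enumerate(grad_oracles, start=1):
        iterates.append(x.copy())          # point played this round
        x = x - (eta0 / np.sqrt(t)) * grad(x)
        norm = np.linalg.norm(x)           # project back onto the ball
        if norm > radius:
            x *= radius / norm
    return iterates
```

For example, feeding the same quadratic loss every round drives the iterates toward its minimizer, since low regret against the best fixed point implies convergence in this stationary case.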

Sublinear optimization

In many modern optimization problems, particularly those arising in machine learning, the amount of data is too large to apply standard convex optimization methods. We develop new optimization algorithms that use randomization to prune the data and produce a correct solution while running in time smaller than the size of the data representation. Such sublinear-time algorithms are applied to linear classification, training support vector machines, semidefinite optimization, and other problems. These new algorithms are based on a primal-dual approach; they combine novel sampling techniques with randomized implementations of online learning algorithms. The results are often accompanied by information-theoretic lower bounds showing the running times to be nearly best possible in the unit-cost RAM model.
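A core primitive behind such sublinear-time algorithms is estimating an inner product by sampling a few coordinates instead of reading the whole vector. The sketch below shows one standard importance-sampling version of this idea (sampling coordinates proportionally to squared entries); it is a hedged illustration of the sampling technique, not the full primal-dual algorithm.

```python
import numpy as np

def sampled_dot(w, x, num_samples, rng):
    """Unbiased estimate of <w, x> from num_samples coordinate samples.

    Coordinate i is drawn with probability w_i^2 / ||w||^2, and the
    observation x_i / w_i is rescaled by ||w||^2 so the expectation
    equals the true inner product. Each sample costs O(1), so the
    estimate takes O(num_samples) time rather than O(dim).
    """
    w = np.asarray(w, dtype=float)
    x = np.asarray(x, dtype=float)
    w_sq = np.dot(w, w)
    probs = w**2 / w_sq                       # sampling distribution
    idx = rng.choice(len(w), size=num_samples, p=probs)
    estimates = x[idx] / w[idx] * w_sq        # unbiased per-sample estimates
    return estimates.mean()
```

Coordinates where w is zero are never sampled, so the division is safe; the variance shrinks as 1/num_samples, which is what lets a classifier check margins approximately in sublinear time.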

Projection free learning

The computational bottleneck in applying state-of-the-art iterative methods to ML/optimization is often the so-called "projection step". We design projection-free optimization algorithms that replace projections with more efficient linear optimization steps. Recent results include a projection-free algorithm for online learning and the first linearly convergent projection-free algorithm.
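The classical template for replacing projections with linear optimization steps is the Frank-Wolfe (conditional gradient) method: each iteration minimizes a linear function over the feasible set, which for sets like the l1 ball reduces to picking a single signed vertex. The sketch below is a minimal offline version over the l1 ball, with illustrative function names; it is not one of the specific algorithms mentioned above.

```python
import numpy as np

def frank_wolfe_l1(grad_f, x0, radius, num_steps):
    """Frank-Wolfe (conditional gradient) over the l1 ball of given radius.

    Instead of projecting, each step solves a linear problem over the
    feasible set: the minimizer of <g, v> over the l1 ball is a signed
    vertex, found by inspecting the largest-magnitude gradient entry.
    """
    x = np.array(x0, dtype=float)
    for t in range(num_steps):
        g = grad_f(x)
        i = np.argmax(np.abs(g))           # linear optimization step
        v = np.zeros_like(x)
        v[i] = -radius * np.sign(g[i])     # best vertex of the l1 ball
        gamma = 2.0 / (t + 2.0)            # standard step-size schedule
        x = (1 - gamma) * x + gamma * v    # convex combination stays feasible
    return x
```

Note that every iterate is a convex combination of vertices, so feasibility is maintained without any projection; the price is the slower O(1/t) convergence rate of the basic method, which is what the linearly convergent variants improve on.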

Learning with partial feedback

Numerous decision problems reveal only noisy, corrupted, or otherwise partial feedback on the choices made. The Bandit Convex Optimization framework is a general methodology for tackling such missing-feedback problems, including the famed multi-armed bandit problem and its generalizations. Applications include online routing, rank aggregation, matrix completion, and more. Our research addresses efficient algorithms for bandit convex optimization, both through general methodologies for exploration and through new techniques from optimization and variance exploitation.
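The multi-armed bandit problem mentioned above illustrates the partial-feedback setting concretely: only the reward of the arm actually pulled is observed. A standard algorithm for the adversarial version is EXP3, sketched below; the parameter names are illustrative, and this is the textbook method rather than any of the newer techniques from our research.

```python
import numpy as np

def exp3(reward_fn, num_arms, num_rounds, gamma, rng):
    """EXP3 for the adversarial multi-armed bandit (rewards in [0, 1]).

    Maintains exponential weights over arms, mixed with uniform
    exploration at rate gamma. The observed reward is importance-
    weighted by the sampling probability, giving an unbiased estimate
    of every arm's reward despite seeing only the pulled arm.
    """
    weights = np.ones(num_arms)
    pulls = []
    for t in range(num_rounds):
        probs = (1 - gamma) * weights / weights.sum() + gamma / num_arms
        arm = rng.choice(num_arms, p=probs)
        reward = reward_fn(t, arm)          # feedback only for this arm
        weights[arm] *= np.exp(gamma * reward / (probs[arm] * num_arms))
        weights /= weights.max()            # rescale; probs are unchanged
        pulls.append(arm)
    return pulls
```

The exploration floor gamma / num_arms keeps every importance weight bounded, which is what controls the variance of the reward estimates and yields sublinear regret.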