
Sparse Sampling Algorithms for Probabilistic Artificial Intelligence

Date and Time
Wednesday, February 27, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Michael Kearns, University of Pennsylvania
Host
Robert Tarjan
A powerful theme from computational learning theory is uniform convergence: the fact that a large or infinite class C of random variables can be simultaneously estimated from a sample whose size is only logarithmic in the size of C, or depends only on combinatorial parameters such as its VC dimension. In learning theory, this tool has been applied primarily to supervised learning (classification and regression), and its use has largely been descriptive (for example, to model the generalization ability of existing learning algorithms) rather than prescriptive.
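To make the sample-size claim concrete, here is the standard finite-class instantiation via Hoeffding's inequality and a union bound (a textbook derivation, offered for context rather than taken from the talk). If each X in C takes values in [0,1] and \hat{\mu}_m(X) denotes its empirical mean over m independent samples, then

```latex
\Pr\left[\, \exists\, X \in C \;:\; \bigl|\hat{\mu}_m(X) - \mathbb{E}[X]\bigr| > \epsilon \,\right]
\;\le\; 2\,\lvert C \rvert\, e^{-2 m \epsilon^{2}},
```

so m = O(\epsilon^{-2} \log(|C|/\delta)) samples suffice to estimate every member of C to accuracy \epsilon with probability at least 1 - \delta; for infinite classes, \log|C| is replaced by a bound in terms of the VC dimension.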

Over the past several years, we have been systematically exploring the application of this fundamental notion to a broader set of natural problems in probabilistic artificial intelligence, and have found a number of striking examples where it leads to novel algorithms and analyses for core problems. In particular, by applying uniform convergence arguments to carefully structured but very sparse samples in probabilistic environments, we obtain:

* A new algorithm providing a near-optimal solution to the exploration-exploitation trade-off in Markov decision processes, a basic problem in reinforcement learning;

* A sparse sampling algorithm for planning in Markov decision processes whose running time has no dependence on the number of states (a minimal sketch of this idea appears after this list);

* Powerful new algorithms for probabilistic inference in Bayesian networks; and

* Algorithms for computing approximate Nash equilibria in large-population games.
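The second bullet is the most directly illustrable. Below is a minimal sketch of the sparse-sampling planning idea: build a lookahead tree by drawing a fixed number of sampled transitions per action from a generative model and recursing to a fixed depth, so the cost depends on the sampling width and depth but never on the size of the state space. The function names, parameters, and the toy generative model are all illustrative assumptions, not details from the talk.

```python
import random

def sparse_sample_q(model, state, actions, depth, width, gamma):
    """Estimate Q*(state, a) for each action with a sparse lookahead tree.

    `model(state, action)` is assumed to be a generative model returning a
    sampled (next_state, reward) pair.  Per action we draw `width` samples
    and recurse `depth` levels, so the total work is
    (len(actions) * width) ** depth -- independent of how many states
    the MDP has.
    """
    if depth == 0:
        return {a: 0.0 for a in actions}
    q = {}
    for a in actions:
        total = 0.0
        for _ in range(width):
            next_state, reward = model(state, a)
            # Value of the sampled next state: best Q-estimate one level down.
            q_next = sparse_sample_q(model, next_state, actions,
                                     depth - 1, width, gamma)
            total += reward + gamma * max(q_next.values())
        q[a] = total / width
    return q

def plan(model, state, actions, depth=3, width=5, gamma=0.9):
    """Return the greedy action at `state` under the sparse-sample estimates."""
    q = sparse_sample_q(model, state, actions, depth, width, gamma)
    return max(q, key=q.get)

# Hypothetical two-state chain MDP: action 1 usually moves toward state 1,
# where taking action 1 again yields reward 1; action 0 moves back for free.
def toy_model(state, action):
    if action == 1 and random.random() < 0.8:
        return min(state + 1, 1), (1.0 if state == 1 else 0.0)
    return max(state - 1, 0), 0.0

print(plan(toy_model, state=0, actions=[0, 1]))  # usually prints 1
```

With the width and depth chosen as functions of the discount factor and the desired accuracy, estimates of this kind can be driven near-optimal, as the bullet above claims; the point of the sketch is simply that nothing in the computation ever enumerates the state space.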

These are only a few of the many algorithmic applications of learning theory methods in modern AI. This talk will present a self-contained survey of these developments.
