Princeton University
Computer Science Department

Computer Science 402
Artificial Intelligence

Alexandru Niculescu-Mizil

Fall 2013


Schedule and readings

Numbers under the R&N column refer to chapters or sections of the Russell & Norvig text (3rd edition).  Additional required and optional readings and links are also listed below.

This syllabus is constantly evolving as the semester progresses, so check back often (and let me know if it seems out of date).





Each entry below gives the lecture number, date, topic, the R&N reading, required readings, and other (optional) readings and links.



1. Th 9/12. General introduction to AI. R&N: 1. Required: AI Growing Up by James Allen (but skip or skim page 19 to end). Optional: AAAI website with lots of readings on AI in general, AI in the news, etc.; Computing Machinery and Intelligence by Alan Turing.

2. Tu 9/17. Uninformed (blind) search. R&N: 3.1-3.4.
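Not part of the course materials, but as a quick illustration of the uninformed-search reading: a minimal sketch of breadth-first search, one of the blind strategies in R&N 3.4. The toy graph and its node names are invented for the example.

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search: expands nodes in order of depth, so the
    first path found to the goal is shortest in number of edges."""
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:      # walk parent links back to start
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in parent:        # skip already-reached states
                parent[nxt] = node
                frontier.append(nxt)
    return None                          # goal unreachable

# Hypothetical toy graph, just for the demo.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}
path = bfs("A", "E", lambda n: graph[n])
```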
3. Th 9/19. Informed (heuristic) search. R&N: 3.5-3.6.
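The heuristic-search reading centers on A*, which orders the frontier by f(n) = g(n) + h(n). A minimal sketch (not from the readings), run on a hypothetical 4x4 grid with the Manhattan-distance heuristic:

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search. neighbors(n) yields (successor, step_cost) pairs;
    h must be admissible (never overestimate) for optimality."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):   # found a cheaper route
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Hypothetical 4x4 grid, 4-connected, unit step costs.
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < 4 and 0 <= y + dy < 4:
            yield (x + dx, y + dy), 1

goal = (3, 3)
cost, path = astar((0, 0), goal, grid_neighbors,
                   lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
```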
4. Tu 9/24. Local search; searching in games. R&N: 5 (ok to skip 5.5-5.6). Optional: "The Chess Master and the Computer" by Garry Kasparov; play checkers with Chinook.
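For the game-search lecture, here is a minimal sketch of plain minimax (R&N ch. 5, without alpha-beta pruning). The two-ply game tree and its payoffs are invented for illustration.

```python
def minimax(node, maximizing, children, value):
    """Minimax value of a game-tree node: MAX picks the child with the
    highest backed-up value, MIN the lowest."""
    kids = children(node)
    if not kids:                          # terminal position
        return value(node)
    vals = [minimax(k, not maximizing, children, value) for k in kids]
    return max(vals) if maximizing else min(vals)

# Hypothetical two-ply tree: MAX moves to "a" or "b", MIN then picks a leaf.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf = {"a1": 3, "a2": 12, "b1": 2, "b2": 8}
best = minimax("root", True, lambda n: tree.get(n, []), lambda n: leaf[n])
```

MIN holds MAX to min(3, 12) = 3 under "a" and min(2, 8) = 2 under "b", so MAX's minimax value is 3.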
5. Th 9/26. Propositional logic. R&N: 7.1-7.4.
6. Tu 10/1. Theorem proving and the resolution algorithm. R&N: 7.5. Required: handout on converting to CNF. Optional: "The Logic Theory Machine" by Allen Newell and Herbert A. Simon (1956), the first AI paper on theorem proving.
7. Th 10/3. Practical methods of solving CNF sentences. R&N: 7.6. Required: Clause Learning in SAT by R. Tichy and T. Glase. Optional: Heavy-Tailed Phenomena in Satisfiability and Constraint Satisfaction Problems by C. Gomes, B. Selman, N. Crato, and H. Kautz.
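The practical CNF solvers in this lecture build on the DPLL procedure: unit propagation plus case splitting. A minimal sketch, assuming a hypothetical encoding of clauses as sets of signed integers (negative = negated variable), which is not taken from the readings:

```python
def dpll(clauses, assignment=None):
    """DPLL sketch: returns a satisfying {var: bool} assignment, or None."""
    if assignment is None:
        assignment = {}
    clauses = [set(c) for c in clauses]
    # Unit propagation: repeatedly satisfy single-literal clauses.
    changed = True
    while changed:
        changed = False
        for c in clauses:
            if len(c) == 1:
                lit = next(iter(c))
                assignment[abs(lit)] = lit > 0
                new = []
                for d in clauses:
                    if lit in d:
                        continue              # clause satisfied, drop it
                    if -lit in d:
                        d = d - {-lit}        # literal falsified, shrink
                        if not d:
                            return None       # empty clause: conflict
                    new.append(d)
                clauses = new
                changed = True
                break
    if not clauses:
        return assignment
    # Split: branch on the first variable of the first remaining clause.
    var = abs(next(iter(clauses[0])))
    for val in (var, -var):
        res = dpll(clauses + [{val}], dict(assignment))
        if res is not None:
            return res
    return None

# (p or q) and (not p or q) and (not q or r), as variables 1, 2, 3.
model = dpll([{1, 2}, {-1, 2}, {-2, 3}])
```

Real solvers add the clause learning described in the Tichy & Glase reading on top of this backtracking core.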
8. Tu 10/8. Applications of solving CNF sentences, including planning; cursory look at first-order logic; uncertainty and basics of probability. R&N: 7.7; 10.1; 10.4.1; 8.1-8.3 (ok to skim).
9. Th 10/10. Independence and Bayes' rule. R&N: 13.4-13.5. Optional: "What is the chance of an earthquake?" (article on interpreting probability, by Freedman & Stark).
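Bayes' rule from this lecture is one line of code. Here it is applied to the standard medical-test example; the numbers below are hypothetical, not from the readings.

```python
def bayes(prior, likelihood, evidence_prob):
    """Bayes' rule: P(H | e) = P(e | H) * P(H) / P(e)."""
    return likelihood * prior / evidence_prob

# Hypothetical numbers: disease prior 1%, sensitivity P(pos|disease) = 0.9,
# false-positive rate P(pos|healthy) = 0.05.
p_d, p_pos_d, p_pos_h = 0.01, 0.9, 0.05
p_pos = p_pos_d * p_d + p_pos_h * (1 - p_d)   # total probability of a positive
posterior = bayes(p_d, p_pos_d, p_pos)
```

Despite the accurate test, the posterior is only about 0.15, because the prior is so small; this is the kind of reasoning the Freedman & Stark article probes.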
10. Tu 10/15. Bayesian networks: semantics. R&N: 14.1-14.3. Optional: "Introduction to probabilistic topic models" by David Blei; a brief tutorial on Bayes nets (and HMMs), with links for further reading.
11. Th 10/17. Exact and approximate inference with Bayesian networks. R&N: 14.4-14.5.
12-14. Tu 10/22, Th 10/24, Tu 11/5. Uncertainty over time (temporal models; HMMs); Kalman filters. R&N: 15.1-15.4. Required: formal derivations.
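Filtering in an HMM (the forward algorithm of R&N 15.2) is a predict step through the transition model followed by a sensor update and normalization. A minimal sketch; the transition and sensor numbers are the umbrella-world values from R&N ch. 15.

```python
def forward(prior, transition, sensor, observations):
    """HMM filtering: returns the belief P(X_t | e_1:t) after the
    observation sequence. transition[i][j] = P(X_t=j | X_{t-1}=i);
    sensor[j][e] = P(e | X_t=j)."""
    belief = list(prior)
    n = len(prior)
    for e in observations:
        # Predict: push the current belief through the transition model.
        predicted = [sum(belief[i] * transition[i][j] for i in range(n))
                     for j in range(n)]
        # Update: weight by the sensor model, then normalize.
        unnorm = [sensor[j][e] * predicted[j] for j in range(n)]
        z = sum(unnorm)
        belief = [p / z for p in unnorm]
    return belief

# Umbrella world: states (rain, no rain); evidence 0 = umbrella observed.
T = [[0.7, 0.3], [0.3, 0.7]]
S = [{0: 0.9}, {0: 0.2}]
belief = forward([0.5, 0.5], T, S, [0, 0])   # umbrella on both days
```

After two umbrella observations the rain probability is about 0.883, matching the worked example in the text.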
15. Th 11/7. Kalman filters, DBNs, particle filters. R&N: 15.5.
16. Tu 11/12. DBNs; particle filtering; begin decision theory; MDPs. R&N: 17.1. Optional: the basics of utility theory, R&N 16.1-16.3.
17-18. Th 11/14, Tu 11/19. Markov decision processes: Bellman equations, value iteration, policy iteration. R&N: 17.1-17.4.1. Optional: Sutton & Barto's excellent book on reinforcement learning and MDPs; value iteration demo.
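The Bellman update behind value iteration (R&N 17.2) fits in a few lines: repeatedly set V(s) to the best expected one-step return plus discounted future value, until the values stop changing. The two-state MDP below is a made-up example, not from the text.

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-6):
    """Value iteration: V(s) <- max_a sum_s' P(s'|s,a) [R(s,a,s') + gamma V(s')].
    transition(s, a) returns a list of (next_state, probability) pairs."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (reward(s, a, s2) + gamma * V[s2])
                    for s2, p in transition(s, a))
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best                 # in-place (Gauss-Seidel) update
        if delta < eps:
            return V

# Hypothetical 2-state chain: "go" from s0 reaches s1 with reward 1;
# s1 is absorbing with no further reward.
V = value_iteration(
    ["s0", "s1"],
    actions=lambda s: ["go", "stay"],
    transition=lambda s, a: [("s1", 1.0)] if a == "go" else [(s, 1.0)],
    reward=lambda s, a, s2: 1.0 if (s == "s0" and s2 == "s1") else 0.0,
)
```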
19. Th 11/21. Machine learning; decision trees.
20. Tu 11/26. Neural networks; theory of learning. Required: generalization error theorem proved in class. Optional: a demo of LeNet, a neural network for optical-character recognition, is available here (click the links on the left to see how it does on various inputs; the figure shows the activations of various layers of the network, where layer-1 is the deepest; for more detail, see the papers on the LeNet website, such as this one); original "Occam's Razor" paper.
21. Tu 12/3. Theory of learning; support-vector machines. Optional: tutorial on SVMs.
22. Th 12/5. Bagging and boosting. R&N: 18.10. Required: margins "movie". Optional: introductory chapter from Boosting: Foundations and Algorithms by Schapire & Freund.
23. Tu 12/10. Clustering; learning Bayes net and HMM parameters.
24. Th 12/12. Reinforcement learning in MDPs. R&N: 21.1-21.4. Optional: Sutton & Barto's excellent book on reinforcement learning and MDPs; learning to play keepaway in RoboCup soccer using reinforcement learning (scroll down to find Flash demos).
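The model-free side of this final lecture can be sketched as tabular Q-learning with epsilon-greedy exploration (R&N 21.3). The three-state corridor environment and all hyperparameters below are hypothetical, chosen only to make a runnable example.

```python
import random

def q_learning(n_states, n_actions, step, episodes=2000,
               alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: after each sampled transition apply
    Q(s,a) <- Q(s,a) + alpha [r + gamma max_a' Q(s',a') - Q(s,a)].
    step(s, a) returns (next_state, reward, done)."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < epsilon:                       # explore
                a = rng.randrange(n_actions)
            else:                                            # exploit
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Hypothetical 3-state corridor: action 1 moves right, action 0 moves left;
# reaching state 2 pays reward 1 and ends the episode.
def corridor(s, a):
    s2 = min(s + 1, 2) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 2 else 0.0), s2 == 2

Q = q_learning(3, 2, corridor)
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(3)]
```

The learned greedy policy moves right from both non-terminal states, as the keepaway agents in the linked demos do at much larger scale.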