Princeton University 
Computer Science 402 

Numbers under the R&N column refer to chapters or sections of the Russell & Norvig text (3rd edition). Additional required or optional readings and links are also listed below.
This syllabus is constantly evolving as the semester progresses, so check back often (and let me know if it seems not to be up to date).
#  Date  Topic  Readings (required: R&N; other)  Other (optional) readings and links
1  Th 9/12  General introduction to AI.  1  "AI Growing Up" by James Allen (but skip or skim page 19 to end); AAAI website with lots of readings on AI in general, AI in the news, etc.; "Computing Machinery and Intelligence" by Alan Turing.

2  Tu 9/17  Uninformed (blind) search  3.1–3.4
3  Th 9/19  Informed (heuristic) search  3.5–3.6
4  Tu 9/24  Local search; searching in games  4.1; 5 (ok to skip 5.5–5.6)  "The Chess Master and the Computer" by Garry Kasparov; play checkers with Chinook

5  Th 9/26  Propositional logic  7.1–7.4
6  Tu 10/1  Theorem proving and the resolution algorithm  7.5  handout on converting to CNF  "The Logic Theory Machine" by Allen Newell and Herbert A. Simon (1956), the first AI paper on theorem proving.
7  Th 10/3  Practical methods of solving CNF sentences  7.6  "Clause Learning in SAT" by R. Tichy, T. Glase; "Heavy-Tailed Phenomena in Satisfiability and Constraint Satisfaction Problems" by C. Gomes, B. Selman, N. Crato, H. Kautz
8  Tu 10/8  Applications of solving CNF sentences, including planning; cursory look at first-order logic; uncertainty and basics of probability  7.7; 10.1; 10.4.1; 8.1–8.3 (ok to skim); 13.1–13.3

9  Th 10/10  Independence and Bayes' rule  13.4–13.5  "What is the chance of an earthquake?" (article on interpreting probability, by Freedman & Stark)
10  Tu 10/15  Bayesian networks: semantics  14.1–14.3  "Introduction to probabilistic topic models" by David Blei; brief tutorial on Bayes nets (and HMMs), with links for further reading
11  Th 10/17  Exact and approximate inference with Bayesian networks  14.4–14.5
12–14  Tu 10/22, Th 10/24, Tu 11/5  Uncertainty over time (temporal models; HMMs); Kalman filters  15.1–15.4  formal derivations
15  Th 11/7  Kalman filters, DBNs, particle filters  15.5
16  Tu 11/12  DBNs; particle filtering; begin decision theory; MDPs  17.1; the basics of utility theory: 16.1–16.3 from R&N

17–18  Th 11/14, Tu 11/19  Markov decision processes: Bellman equations, value iteration, policy iteration  17.1–17.4.1  Sutton & Barto's excellent book on reinforcement learning and MDPs; Value Iteration Demo

19  Th 11/21  Machine learning; decision trees  18.1–18.4
20  Tu 11/26  Neural networks; theory of learning  18.6–18.7; 18.5  generalization error theorem proved in class  A demo of LeNet, a neural network for optical-character recognition, is available here. Click the links on the left to see how it does on various inputs. The figure shows the activations of various layers of the network, where layer 1 is the deepest. (For more detail, see the papers on the LeNet website, such as this one.) Original "Occam's Razor" paper

21  Tu 12/3  Theory of learning; support-vector machines  18.5; 18.9  tutorial on SVMs
22  Th 12/5  Bagging and boosting  18.10  margins "movie"; introductory chapter from Boosting: Foundations and Algorithms by Schapire & Freund
23  Tu 12/10  Clustering; learning Bayes net and HMM parameters  20.1–20.3
24  Th 12/12  Reinforcement learning in MDPs  21.1–21.4  Sutton & Barto's excellent book on reinforcement learning and MDPs; "Learning to play keepaway in RoboCup soccer using reinforcement learning" (scroll down to find Flash demos).