## Final Exam Information
## Fall 2003

The final will be held at 8:30am on Friday, January 16 in Friend Center 101.

If you do better on the exam than on the homeworks, the final exam will be worth 35% of your final grade. Otherwise, if you did better on the homeworks than on the exam, it will be worth only 25% of your grade.

The exam will be closed book. However, **you may bring a one-page
"cheat sheet"** consisting of a single, ordinary 8.5"x11"
sheet of paper with whatever notes you wish written upon it. You may
write on both the front and the back, but the notes must be in your own
handwriting (not computer generated or photocopied).

Also, **be sure to bring a calculator.**

You may not use the textbook, your notes, a computer, or any other materials during the exam.

The exam will include short-answer questions, especially multiple choice and modified true-false questions.

Modified true-false questions consist of a statement, part of which is underlined. Your response to the statement should be either "Correct", meaning that the statement is correct as is, or you should cross out part or all of what is underlined and replace it with the correct word or phrase. For instance, the statement might be:

The capital of New Jersey is <u>Albany</u>.

Here, the correct response is to cross out the word "Albany" and replace it with "Trenton".

If the statement is correct as is, you should mark it "Correct" even if you can think of a "better" answer. For instance, suppose the statement is:

Trenton is a city in the <u>United States</u>.

You should simply mark this sentence as "Correct" (which it is), even though "New Jersey" might be a "better" completion of the sentence.

Unless otherwise indicated, the multiple choice questions may have several
correct answers. You should circle *all* that apply. For instance:

The colors of the American flag include which of the following?

- (a) red
- (b) pink
- (c) blue
- (d) purple

Obviously, the right answer is to circle both (a) and (c). If none of the answers are correct, write the word "None" next to the choices.

No credit will be given for questions left unanswered, so you should be sure to answer all questions, even if you are only taking your best guess.

The exam will also include a smaller number of problems similar to the more straightforward of the written exercises assigned on the homeworks.

Here are a few sample questions.

- In a temporal model, <u>tracking</u> is the problem of computing the state distribution at the <u>current</u> time step <u>t</u> given all available evidence.

- Policy improvement and policy evaluation are the two key steps of the <u>value iteration</u> algorithm.

- Which of the following are algorithms for determining the satisfiability of a CNF formula?

  - depth-first search
  - DPLL
  - Viterbi
  - Walksat

- Which of the following are guaranteed to use memory that is bounded by a polynomial in the depth of the optimal solution (assuming a fixed branching factor)?

  - breadth-first search
  - iterative deepening search
  - bidirectional search

The exam problems will be similar to the following (though in most cases, *considerably*
shorter given the time constraints of the exam):

- HW#1, problem 3
- HW#2, problem 1
- HW#4, problem 2
- HW#5, problem 1
- HW#6, problem 1a,b
- HW#7, problem 1

In principle, anything covered in lecture or in the assigned readings is "fair game". However, realistically, you can expect that the emphasis will be placed on those same topics that were emphasized in lecture.

Below is a list of topics, concepts and algorithms that you should be familiar with. I have attempted to make this an exhaustive list, although I cannot guarantee that I did not miss an item or two.

- search and problem solving
  - properties of search algorithms (completeness, optimality, time and space efficiency)
  - uninformed (blind) search
    - BFS
    - uniform-cost search
    - DFS
    - depth-limited search
    - IDS
    - bidirectional search
  - informed (heuristic) search
    - best-first search
    - A*
    - heuristic functions (consistent, admissible)
  - local search
    - objective function
    - hill climbing
    - simulated annealing
    - genetic algorithms
  - adversarial search
    - minimax algorithm
    - alpha-beta pruning
    - evaluation functions
    - tricks for speeding up game playing programs
- logic
  - elements of a logic (semantics, syntax, models, etc.)
  - entailment
  - propositional logic (symbols, literals, etc.)
  - Horn clauses/sentences
  - inference algorithms
    - soundness and completeness
    - model checking
    - inference rules
    - resolution
    - DPLL
    - Walksat
  - formulating problems as satisfiability instances
    - planning problems
  - first-order logic
- probability
  - events and atomic events
  - random variables
  - distribution
  - joint distribution
  - conditional probability
  - marginal probability
  - independence
  - conditional independence
  - Bayes' rule
  - expected value
  - conditional expected value
- Naive Bayes algorithm
- Bayesian networks
  - meaning and interpretation
  - Markov blanket
  - inference
    - variable elimination
    - direct sampling, rejection sampling, likelihood weighting
    - MCMC
- Markov chains
  - stationary distribution
- temporal models
  - states, observations, evidence, etc.
  - HMM's
  - belief state
  - filtering
  - prediction
  - smoothing (forward-backward algorithm)
  - Viterbi
  - Kalman filters
  - DBN's
  - particle filters
- speech recognition
  - phones, phonemes, frames, etc.
  - triphone model
  - three-state phone model
  - acoustic model
  - language model
  - bigram/trigram model
  - pronunciation model
- utility
- MDP's
  - states, rewards, actions, etc.
  - policy
  - optimal policy
  - utility
  - discounted reward
  - Bellman equations
  - value iteration
  - policy evaluation
  - policy iteration
  - convergence properties
  - policy improvement theorem
- POMDP's
- learning
  - types of learning problems (supervised, regression, classification, etc.)
  - Occam's razor
  - conditions for effective learning
  - features (a.k.a. attributes or dimensions)
  - class (a.k.a. label or output)
  - instances and examples
  - training error, test error, generalization error
  - hypotheses
  - overfitting
  - theory: PAC bounds
- learning algorithms
  - decision trees
    - how to grow: impurity measures
    - pruning
  - AdaBoost
    - weak hypotheses and weak learning
  - SVM's
    - kernel trick
  - neural nets
    - gradient descent and backprop
  - learning parameters of a Bayes net
    - principles:
      - maximum likelihood
      - MAP
      - full Bayesian
    - EM
  - learning in MDP's (reinforcement learning)
    - model-based approach (adaptive dynamic programming)
    - exploration versus exploitation
    - model-free approach
      - TD algorithms
      - Q-learning
- philosophy / future of AI
  - Turing test
  - Chinese room experiment