Computer Science 522
· FINAL is on-line - due May 15th
· All students taking this course should join the mailing list. It's also recommended for anyone planning to go to some of the lectures. (I'll announce in advance on the mailing list the topics for each of the more advanced lectures.)
Professor: Boaz Barak - 405 CS Building. Email: Phone: 258- (I prefer email)
Graduate Coordinator: Melissa Lawson - 310 CS Building - 258-5387 firstname.lastname@example.org
Grading: 50% homework (expected 5-6 assignments), 50% take-home final.
Prerequisites: There are no formal prerequisites, but I will assume some degree of mathematical maturity and familiarity with basic notions such as functions, sets, graphs, O notation, and probability over finite sample spaces. See the following assumed-knowledge document to brush up on this material.
This is a graduate course in computational complexity, including both "classical" results from the last few decades, and very recent results from the last year.
Complexity theory deals with the power of efficient computation. While over the last century logicians and computer scientists developed a pretty good understanding of the power of finite-time algorithms (where "finite" can mean an algorithm that on a 1K input will take longer to run than the lifetime of the sun), our understanding of efficient algorithms is quite poor. Thus, complexity theory contains more questions, and relationships between questions, than actual answers. Nevertheless, we will learn about some fascinating insights, connections, and even a few answers, that have emerged from complexity theory research.
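To make "finite but infeasible" concrete, here is a back-of-the-envelope calculation (a sketch; the 10^9 operations-per-second machine speed is an assumption of mine):

```python
# A 2^n-time algorithm on a 1000-bit input, on a hypothetical machine
# performing 10^9 operations per second.
ops = 2 ** 1000                                     # total operations
ops_per_second = 10 ** 9                            # assumed machine speed
seconds = ops // ops_per_second
sun_lifetime_seconds = 10 ** 10 * 365 * 24 * 3600   # ~10 billion years

print(seconds > sun_lifetime_seconds)   # True: the run outlasts the sun
print(len(str(2 ** 1000)))              # 302 decimal digits of work
```

The algorithm halts on every input, so it is "finite time" in the logician's sense, yet utterly useless in practice.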
Among the questions we will tackle (for various types of computational problems) are:
For a more technical description, see the description from the course catalog. I also plan to include some topics, including some results from the last couple of years, that were not taught in the last iteration of the course. These include the zig-zag construction of expander graphs, Reingold's deterministic O(log n)-space algorithm for undirected s-t connectivity, and a full proof of the PCP theorem (following Dinur's approach). A general recurring theme in this course will be the notion of obtaining combinatorial objects with random and pseudorandom properties.
Perhaps the question that will occur to you after attending this course is "How is it that all these seemingly intelligent people have been working on this for several decades and have not managed to prove even some ridiculously obvious conjectures?". The answer is that we need your help to solve some of these problems, and get rid of this embarrassing situation.
Our main textbook will be the upcoming book Computational Complexity: A Modern Approach by Sanjeev Arora. Drafts of the book will be available from Pequod Copy. Whenever presenting material that is not in this book, I will provide references to the relevant research papers or other lecture notes.
Some lecture notes from similar courses: Sanjeev Arora, Rudich and Blum, Madhu Sudan, Luca Trevisan, Russell Impagliazzo (2), Chris Umans, Oded Goldreich (see also his texts on computational complexity), Feige & Raz, Moni Naor, Valentine Kabanets, Muli Safra (2)
Pseudorandomness courses: Salil Vadhan, Luca Trevisan, David Zuckerman, Oded Goldreich, Venkat Guruswami
PCP and hardness of approximation: Uri Feige, Guruswami and O'Donnell
Other courses: Sanjeev Arora: theorist's toolkit, Madhu Sudan: essential coding theory, Linial and Wigderson: expander graphs
Feb 20: Janet Yoon
March 6: Mohammad Mahmoody
March 27: JiMin Song
April 10: Janek Klawe
April 24: Nadia Heninger
FINAL - DO NOT DOWNLOAD UNTIL YOU ARE READY TO WORK ON IT (LaTeX source)
Reading: Chapter 1. Goldreich's text until page 9 (section 3.2). See also Computational Complexity theory - a teaser by Oded Goldreich.
Additional reading: New Yorker article on Alan Turing.
Reading: Chapter 2,5,6.
Additional reading: My philosophical musings are not original and are largely based on what I believe to be the best survey ever written on complexity: A Personal View of Average-Case Complexity by Russell Impagliazzo. One way to phrase complexity theory's mission is to find out which of Russell's worlds we live in.
See Goldreich's text on P, NP and NP-completeness. You might also want to take a look at this PowerPoint presentation on the history and importance of NP, NP-completeness, etc. by Gilat Kol and Tal Kramer from Moni Naor's course on Key Papers in CS.
More philosophically minded people might also be interested in the following two surveys by Scott Aaronson:
Is P vs. NP formally independent? and
NP-complete problems and physical reality.
Reading: Chapter 4.
Additional reading: Diagonalization: a survey by Lance Fortnow.
Reading: Chapter 4 and emailed handout.
Additional reading: Goldreich's text on space complexity.
Reading: Chapter 7 and handout.
Additional reading: Goldreich's text on randomized complexity classes. The following books are recommended for a more in-depth look at discrete probability and randomized computation:
Reading: handout on random and pseudorandom walks
Additional reading: See the following excellent
notes by Hoory, Linial and Wigderson on expander graphs.
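A quick numerical illustration of why the spectral gap controls mixing (a sketch of my own; the cycle C_n stands in for a poorly expanding graph and the complete graph K_n for an excellent one — these example graphs are not from the notes):

```python
def walk_tv(neighbors, steps):
    """Evolve a lazy random walk's distribution from vertex 0 for `steps`
    steps and return its total-variation distance from uniform."""
    n = len(neighbors)
    dist = [0.0] * n
    dist[0] = 1.0
    for _ in range(steps):
        new = [0.0] * n
        for v, p in enumerate(dist):
            new[v] += p / 2                       # laziness: stay put w.p. 1/2
            for u in neighbors[v]:
                new[u] += p / (2 * len(neighbors[v]))
        dist = new
    return sum(abs(p - 1.0 / n) for p in dist) / 2

n = 64
cycle = [[(v - 1) % n, (v + 1) % n] for v in range(n)]
complete = [[u for u in range(n) if u != v] for v in range(n)]
print(walk_tv(cycle, 20))     # still far from uniform (tiny spectral gap)
print(walk_tv(complete, 20))  # essentially uniform (large spectral gap)
```

On the cycle the walk has only spread to within roughly sqrt(20) vertices of its start, while on the complete graph the distance to uniform decays geometrically with the gap, which is the phenomenon expanders give you at constant degree.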
Additional reading: The zig-zag product was defined in this paper by Reingold, Vadhan and Wigderson. However, we're going to use a simpler proof taken from the following two papers, by Rozenman and Vadhan and by Reingold, Trevisan and Vadhan.
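For concreteness, the zig-zag product can be specified via rotation maps as in the Reingold-Vadhan-Wigderson paper: if G is D-regular on N vertices and H is d-regular on D vertices, then G z H is a d^2-regular graph on N x D vertices. A sketch (the packaging of rotation maps as Python functions is my own):

```python
def zigzag_rot(rot_G, rot_H):
    """Rotation map of the zig-zag product G z H.

    rot_G(v, k) = (w, l) means: the k-th edge out of v in G leads to w,
    entering w as its l-th edge; rot_H is the same for the small graph H.
    A product vertex is a pair (v, k); a product edge label is (i, j)."""
    def rot(vertex, label):
        v, k = vertex
        i, j = label
        k2, i2 = rot_H(k, i)        # "zig": a small step within v's cloud
        w, l = rot_G(v, k2)         # the big step in G along edge k2
        l2, j2 = rot_H(l, j)        # "zag": a small step in the new cloud
        return (w, l2), (j2, i2)    # reversed labels keep rot an involution
    return rot

# Tiny example: G = 4-cycle (2-regular on 4 vertices),
# H = the single edge K_2 (1-regular on 2 vertices = G's degree).
def rot_C4(v, k):
    return ((v + 1) % 4, 1) if k == 0 else ((v - 1) % 4, 0)

def rot_K2(k, i):
    return (1 - k, 0)

rot = zigzag_rot(rot_C4, rot_K2)
print(rot((0, 0), (0, 0)))          # ((3, 1), (0, 0))
```

Because each of the three component steps is an involution, applying the product's rotation map twice returns the original (vertex, label) pair, as a rotation map must.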
Additional reading: Goldreich's text on UPATH in L
contains a good presentation of that algorithm (along with a nice appendix on expanders). In particular, it contains
more details on how to implement the algorithm in logarithmic space. The original paper of Reingold can
be found here. We note that a very closely related result (UPATH in space O(log n loglog n)) was obtained simultaneously and independently by Vladimir
Trifonov (a grad student at UT Austin) using quite different tools.
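The punchline of Reingold's algorithm: once the input graph has been converted into a constant-degree expander, s-t connectivity reduces to trying every walk of length O(log n), which needs only the current sequence of edge labels as memory. A toy sketch of that final enumeration step (the example graph here is arbitrary, not an expander-ified one):

```python
from itertools import product

def connected_by_short_walks(neighbors, s, t, max_len):
    """Decide s-t connectivity by trying every walk of length <= max_len,
    remembering only the current list of edge labels (O(max_len * log d)
    bits). In Reingold's algorithm the graph is first made a constant-
    degree expander, so max_len = O(log n) suffices."""
    d = max(len(nb) for nb in neighbors)
    for walk_len in range(max_len + 1):
        for labels in product(range(d), repeat=walk_len):
            v = s
            valid = True
            for lab in labels:
                if lab >= len(neighbors[v]):
                    valid = False       # label exceeds this vertex's degree
                    break
                v = neighbors[v][lab]
            if valid and v == t:
                return True
    return False

# Toy graph: a path 0-1-2-3 plus an isolated vertex 4.
g = [[1], [0, 2], [1, 3], [2], []]
print(connected_by_short_walks(g, 0, 3, 3))   # True
print(connected_by_short_walks(g, 0, 4, 4))   # False
```

The loop over label sequences is exponential in the walk length, which is exactly why the expander preprocessing (guaranteeing logarithmic diameter at constant degree) is the heart of the result.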
Counting problems, definition of #P, permanent. Hardness of unique SAT. Toda's theorem.
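The permanent, the canonical #P-complete function by Valiant's theorem, differs from the determinant only in dropping the signs, yet the naive algorithm sums over all n! permutations. A brute-force sketch:

```python
from itertools import permutations

def permanent(M):
    """perm(M) = sum over all permutations pi of prod_i M[i][pi(i)]:
    the determinant without the alternating signs. For a 0/1 matrix this
    counts the perfect matchings of the corresponding bipartite graph."""
    n = len(M)
    total = 0
    for pi in permutations(range(n)):
        p = 1
        for i, j in enumerate(pi):
            p *= M[i][j]
        total += p
    return total

print(permanent([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))  # 3! = 6
print(permanent([[1, 1], [1, 1]]))                   # 2
```

Unlike the determinant, no Gaussian-elimination shortcut is known, and Valiant's theorem explains why: an efficient algorithm for the permanent would collapse #P into P.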
Finished the proof of Toda's theorem. Definition of interactive proofs. Graph non-isomorphism in IP.
Handout: Interactive Proofs
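The graph non-isomorphism protocol is simple enough to simulate: the verifier sends a random relabeling of a randomly chosen one of the two graphs, and accepts iff the (computationally unbounded) prover names the right source graph. A sketch, with graphs as edge lists (the representation choices are mine):

```python
import random
from itertools import permutations

def norm(edges):
    """Normalize an edge list so equal graphs compare equal."""
    return sorted(tuple(sorted(e)) for e in edges)

def isomorphic(e1, e2, n):
    """Brute-force isomorphism check, standing in for the unbounded prover."""
    target = norm(e2)
    return any(norm((p[u], p[v]) for u, v in e1) == target
               for p in permutations(range(n)))

def gni_round(G0, G1, n, rng):
    """One round: verifier sends a random relabeling of a random G_b;
    prover answers which graph it came from; verifier accepts iff correct."""
    b = rng.randrange(2)
    perm = list(range(n))
    rng.shuffle(perm)
    challenge = [(perm[u], perm[v]) for u, v in (G0, G1)[b]]
    answer = 0 if isomorphic(challenge, G0, n) else 1
    return answer == b

# Triangle-plus-isolated-vertex vs. a path: non-isomorphic, so the honest
# prover wins every round; on isomorphic graphs it could win at most half.
G0 = [(0, 1), (1, 2), (2, 0)]    # triangle on {0,1,2}; vertex 3 isolated
G1 = [(0, 1), (1, 2), (2, 3)]    # path on 4 vertices
rng = random.Random(0)
print(all(gni_round(G0, G1, 4, rng) for _ in range(10)))  # True
```

When the graphs are isomorphic, the challenge carries no information about b, so any prover errs with probability 1/2 per round; repeating drives the soundness error down exponentially.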
Three wonderful surveys for spring break reading (two old and one new):
Additional reading: Goldreich's text on the
proof that IP[k] is in AM[k+3].
Goldreich's text on IP=PSPACE. Trevisan's
lecture notes on IP=PSPACE
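The engine of IP = PSPACE is the sumcheck protocol. Below is a minimal sketch for verifying a claim S = sum of g(x) over x in {0,1}^n, for a low-degree polynomial g over F_p, with an honest prover (the field, the example polynomial, and the code organization are my choices, not from the lecture notes):

```python
import random

P = 2 ** 31 - 1   # a Mersenne prime; the field choice is arbitrary

def bool_sum(g, n):
    """Sum of g over {0,1}^n mod P (exponential time; prover-side only)."""
    total = 0
    for mask in range(2 ** n):
        total += g(tuple((mask >> i) & 1 for i in range(n)))
    return total % P

def lagrange_eval(points, x):
    """Evaluate at x the unique polynomial through `points`, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def sumcheck(g, n, deg, claimed, rng):
    """Accept iff claimed = sum_{x in {0,1}^n} g(x), with an honest prover.
    `deg` bounds g's degree in each variable (deg >= 1)."""
    rs, current = [], claimed % P
    for i in range(n):
        # Prover: the univariate restriction g_i(X), sent as deg+1 values.
        gi = lambda x: bool_sum(lambda rest: g(tuple(rs) + (x,) + rest),
                                n - i - 1)
        points = [(x, gi(x)) for x in range(deg + 1)]
        vals = dict(points)
        # Verifier: g_i(0) + g_i(1) must match the running claim...
        if (vals[0] + vals[1]) % P != current:
            return False
        # ...then the claim is reduced to g_i at a random field point.
        r = rng.randrange(P)
        current = lagrange_eval(points, r)
        rs.append(r)
    # Final check: one evaluation of g itself.
    return current == g(tuple(rs)) % P

g = lambda x: (2 * x[0] * x[1] + x[1] * x[2] + x[0] + 3) % P  # multilinear
print(bool_sum(g, 3))                             # 34
print(sumcheck(g, 3, 1, 34, random.Random(0)))    # True
print(sumcheck(g, 3, 1, 35, random.Random(0)))    # False
```

Soundness against a cheating prover comes from the Schwartz-Zippel lemma: a wrong univariate polynomial can agree with the right one at a random field point with probability at most deg/P per round.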
Handout: Homework 4 (LaTeX source).
Additional reading: Lectures 11 and
Trevisan's pseudorandomness course.
Goldreich's text on pseudorandom generators (the relevant material runs until page 22).
Handout on averaging, hybrid, and prediction vs. distinguishing
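In symbols, the averaging-plus-hybrid step behind "prediction vs. distinguishing" (my rendering of the standard argument; the notation is not taken from the handout):

```latex
Let $G:\{0,1\}^n \to \{0,1\}^m$ and let $H_i$ denote the hybrid distribution
whose first $i$ bits come from $G(U_n)$ and whose last $m-i$ bits are uniform,
so that $H_0 = U_m$ and $H_m = G(U_n)$. If a test $D$ satisfies
\[
  \Pr[D(G(U_n))=1] - \Pr[D(U_m)=1]
  \;=\; \sum_{i=0}^{m-1} \Bigl( \Pr[D(H_{i+1})=1] - \Pr[D(H_i)=1] \Bigr)
  \;\ge\; \varepsilon ,
\]
then by averaging some index $i$ contributes at least $\varepsilon/m$, and
from $D$ one builds a predictor for the $(i{+}1)$-st output bit of $G$ given
the first $i$ bits, succeeding with probability at least
$\tfrac{1}{2} + \varepsilon/m$.
```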
Handout: Lecture outline: XOR and hardcore lemmas
Additional reading: Today's lecture was taken entirely from the paper "Hard-core distributions for somewhat hard problems" by Russell Impagliazzo (see also his Wikipedia entry). The XOR lemma has several different proofs with varying parameters; see this survey by Goldreich, Nisan and Wigderson.
A derandomized version of the XOR lemma, which, given a function on n bits, needs only a function on O(n+k) bits to achieve hardness similar to what the original version achieves with k repetitions (and hence nk bits), was given in this paper by Impagliazzo and Wigderson. In particular, using what we've seen, this paper shows how to get BPP=P from functions with 1-1/n hardness for exponential-size circuits.
(We'll show how to get functions like that from functions that are worst-case hard next time).
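For reference, one common way to state Yao's XOR lemma (the exact size and advantage parameters differ among the proofs surveyed by Goldreich, Nisan and Wigderson; this form is only indicative):

```latex
Suppose every circuit of size $s$ errs on at least a $\delta$ fraction of
inputs when computing $f:\{0,1\}^n \to \{0,1\}$, and define
$f^{\oplus k}(x_1,\dots,x_k) = f(x_1) \oplus \cdots \oplus f(x_k)$.
Then every circuit $C$ of size $s' = s \cdot \mathrm{poly}(\varepsilon,\delta)$
satisfies
\[
  \Pr_{x_1,\dots,x_k}\bigl[\, C(x_1,\dots,x_k) = f^{\oplus k}(x_1,\dots,x_k) \,\bigr]
  \;\le\; \tfrac{1}{2} + \tfrac{1}{2}\,(1-2\delta)^k + \varepsilon .
\]
```

Note the sanity check at $k=1$: the bound becomes $1-\delta+\varepsilon$, matching the assumed hardness of $f$ itself.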
Handout: Lecture Outline: Worst-case assumptions
I highly recommend this survey by Valentine Kabanets on derandomization. It contains brief descriptions and pointers to many of the latest results and exciting research directions of this field.
Getting BPP=P: The "XOR Lemma free" approach mentioned in the handout for getting BPP=P was given in this paper by Sudan, Trevisan and Vadhan (STV). As mentioned before, there's a previous alternative approach by Impagliazzo and Wigderson using an "XOR Lemma on steroids". There's even a third, "NW generator free", approach by Shaltiel and Umans (see below).
Hardness vs. randomness tradeoff: The results shown in class generalize to a tradeoff between the assumption on the circuit size required to compute functions in E and the resulting time to derandomize BPP. However, the currently known approach to get an optimal tradeoff (by this we mean optimal w.r.t. to "black-box" proofs) is somewhat different (and in particular uses error correcting codes but not the NW generator). This is obtained in the following two papers by Shaltiel and Umans and Umans. You can also see a PowerPoint presentation by Ronen Shaltiel on this topic.
Uniform derandomization: One fascinating theorem I did not prove is the result that either BPP=EXP or there's a non-trivial subexponential derandomization of BPP. This result is from this Impagliazzo-Wigderson 98 paper, but a more general and perhaps better proof can be found in this paper by Vadhan and Trevisan. The result that even uniform derandomization requires circuit lower bounds comes from these two papers by Impagliazzo-Kabanets-Wigderson and Impagliazzo-Kabanets.
Randomness extractors: Another topic we did not touch is randomness extractors which are used not to derandomize BPP but to execute probabilistic algorithms without access to truly independent and uniform coin tosses. The following survey by Ronen Shaltiel is a good starting point for information on this topic. See also the following presentation by Salil Vadhan.
More resources on derandomization and pseudorandomness: As you can see, one could make a whole course out of the pseudorandomness topics we did not cover, and indeed several such courses with excellent lecture notes have been given. Some recommended links are: Shaltiel
Exercise 5 (Latex Source)
We follow the proof of the PCP theorem in this paper by Irit Dinur.
A variant of this proof is also described in these lecture notes by Guruswami and O'Donnell
(these two sources follow slightly different approaches, and we'll also do some things a bit differently from both).
We only sketched the two remaining minor steps: reducing to constant degree (each variable appears in constantly many clauses) and reducing from a constant number q of queries to 2 queries (losing a factor of, say, 10q in the gap and increasing the alphabet from sigma to sigma^q).
Handout: lecture outline
Reading: Chapter 19.6 of the textbook. Paper by Kushilevitz and Mansour. See also Trevisan's lecture on Goldreich-Levin algorithm.
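A toy version of the Goldreich-Levin idea: when f agrees with the parity <a, x> on well over 3/4 of inputs, each bit a_i can be recovered by a majority vote over f(x) XOR f(x XOR e_i) for random x. This is only the high-agreement warm-up (the full algorithm handles agreement 1/2 + eps via the list-decoding approach); the secret, noise rate, and fresh-noise-per-query oracle below are my simplifications:

```python
import random

def recover_parity(f, n, samples, rng):
    """Recover the secret a from an oracle f computing <a, x> mod 2 on
    well over 3/4 of queries. Since <a, x> xor <a, x xor e_i> = a_i for
    every x, a majority vote over random x recovers each bit."""
    a = []
    for i in range(n):
        votes = sum(f(x) ^ f(x ^ (1 << i))
                    for x in (rng.randrange(2 ** n) for _ in range(samples)))
        a.append(1 if 2 * votes > samples else 0)
    return a

# Demo with a hypothetical secret; the oracle flips its answer with
# probability 0.05 independently per query (a simplification: a real
# corrupted function would be fixed, not freshly noisy per call).
secret = [1, 0, 1, 1, 0, 1, 0, 0]
n = len(secret)
rng = random.Random(7)

def noisy_parity(x):
    bit = sum((x >> i) & secret[i] for i in range(n)) % 2
    return bit ^ (rng.random() < 0.05)

print(recover_parity(noisy_parity, n, 201, rng))   # recovers `secret`
```

Each vote is wrong with probability at most 2 x 0.05, so by a Chernoff bound the majority over 201 samples identifies every bit except with negligible probability; below agreement 3/4 this pairwise trick fails, which is where the actual Goldreich-Levin machinery is needed.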
Additional reading: See the following sources for more advanced related material:
Circuit lowerbounds: draft chapter from book
Additional reading: I highly recommend looking at the excellent Boppana-Sipser survey on circuit lowerbounds.
Handouts: lowerbounds chapter, Homework 6 (LaTeX source)
Additional reading: The Boppana-Sipser survey also contains a description of the switching lemma.
There's also a survey on the switching lemma by Paul Beame.
Additional reading: the original paper by Razborov and Rudich.
Additional reading: an excellent survey by Aharonov.
lecture notes by Umesh Vazirani