Colloquium

Computing Equilibria in Games

Date and Time
Wednesday, March 5, 2008 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Constantinos Daskalakis, from UC Berkeley
Host
Sanjeev Arora
Game Theory is important for the study of large competitive environments, such as the Internet, the market, and even social and biological systems. A key tool in analyzing such systems (games) is the study of their stable states, that is, their equilibria. Understanding the properties of equilibria can give insights into the effectiveness of economic policies, engineering decisions, etc. However, due to the large scale of most interesting games, the problem of computing equilibria cannot be separated from complexity considerations. Motivated by this challenge, I will discuss the problem of computing equilibria in games.

I will first show that computing a Nash equilibrium is an intractable problem. It is not NP-complete, since, by Nash's theorem, an equilibrium is always guaranteed to exist, but it is at least as hard as solving any fixed point computation problem, in a precise complexity-theoretic sense. In view of this hardness result, I will present algorithms for computing approximate equilibria. In particular, I will describe algorithms that achieve constant factor approximations for 2-player games and give a quasi-polynomial time approximation scheme for the multi-player setting.

Finally, I will consider a very natural and important class of games termed anonymous games. In these games every player is oblivious to the identities of the other players; examples arise in auction settings, congestion games, and social phenomena. I will introduce a polynomial time approximation scheme for the anonymous setting and provide surprising connections to Stein's method in probability theory.

Bio: Constantinos (or Costis) Daskalakis grew up in Athens, Greece, where he received his undergraduate degree in Electrical and Computer Engineering from the National Technical University of Athens. In 2004 he moved to California to pursue a Ph.D. in Computer Science at U.C. Berkeley under the supervision of Professor Christos H. Papadimitriou. Costis's work has focused on computational game theory and applied probability, in particular the computation of equilibria in games, the study of social networks, and computational problems in biology. His research is motivated by two questions: "How does the algorithmic perspective influence economics, biology, physics, and the social sciences?" and "How does the study of computational problems arising from areas outside computer science transform the theory of computation?"
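As a concrete, if toy, illustration of equilibrium computation: in a 2x2 bimatrix game, a fully mixed Nash equilibrium (when one exists) falls out of the players' indifference conditions directly. The Python sketch below is a textbook calculation, not the approximation algorithms of the talk, and the function name is mine.

```python
def mixed_equilibrium_2x2(A, B):
    """Fully mixed Nash equilibrium of a 2x2 bimatrix game, if one exists.

    A[i][j], B[i][j] are the payoffs to the row and column player when the
    row player picks i and the column player picks j. Each player mixes so
    the opponent is indifferent between their two pure strategies.
    """
    # Denominators of the indifference conditions.
    denom_q = A[0][0] - A[0][1] - A[1][0] + A[1][1]  # column player's mix
    denom_p = B[0][0] - B[1][0] - B[0][1] + B[1][1]  # row player's mix
    if denom_q == 0 or denom_p == 0:
        return None  # no fully mixed equilibrium; check pure strategies
    q = (A[1][1] - A[0][1]) / denom_q  # prob. column player plays column 0
    p = (B[1][1] - B[1][0]) / denom_p  # prob. row player plays row 0
    if not (0 <= p <= 1 and 0 <= q <= 1):
        return None
    return (p, 1 - p), (q, 1 - q)

# Matching pennies: the unique equilibrium is uniform mixing.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
print(mixed_equilibrium_2x2(A, B))  # ((0.5, 0.5), (0.5, 0.5))
```

The hardness results in the talk concern what happens when the number of strategies and players grows beyond such hand-solvable cases.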

WordNet: A lexical resource for Natural Language Processing

Date and Time
Wednesday, February 27, 2008 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Christiane Fellbaum, from Princeton
Host
Robert Schapire
We discuss the contents, structure and uses of WordNet, a large machine-tractable lexical database developed and maintained at Princeton. This semantic network contains information about tens of thousands of English lexicalized concepts. WordNet enables automatic systems to "understand" the meanings of words, to measure their semantic similarity, and to disambiguate words with multiple meanings. A powerful tool for Natural Language Processing, WordNet is used in a wide range of applications including information retrieval, question answering, machine translation and automatic reasoning and inference.
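For readers who want to try the resource, here is a minimal sketch using NLTK's interface to WordNet (one common access path; the talk does not prescribe a toolkit, and the word choices below are arbitrary examples):

```python
# Assumes `pip install nltk` and a one-time nltk.download('wordnet').
from nltk.corpus import wordnet as wn

# Disambiguation starts from a word's synsets (groups of synonymous senses).
for synset in wn.synsets('bank')[:3]:
    print(synset.name(), '-', synset.definition())

# Semantic similarity derived from the hypernym hierarchy:
# path_similarity lies in (0, 1], higher meaning closer in the network.
dog, cat = wn.synset('dog.n.01'), wn.synset('cat.n.01')
print(dog.path_similarity(cat))
```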

Network Data Streaming - A Computer Scientist's Journey in Signal Processing

Date and Time
Thursday, December 13, 2007 - 4:00pm to 5:30pm
Location
Friend Center 013
Type
Colloquium
Speaker
Jim Xu, from Georgia Institute of Technology
Host
Xiaojuan Ma
With the rapid growth of the Internet, network link speeds have become faster every year to accommodate more Internet users. Measuring and monitoring the traffic on such high-speed links has become an ever more challenging problem. Our team is among the first to realize that data streaming algorithms have the potential to become very useful tools for measuring and monitoring high-speed networks. Data streaming is concerned with processing a long stream of data items in one pass, using a small working memory, in order to estimate certain statistics of the stream. The challenge is to use this small memory to "remember" as much information pertinent to this estimation as possible. However, existing data streaming algorithms are typically designed to process a single stream of data for a single type of statistic, and most of these algorithms cannot operate at very high link speeds. Targeting these deficiencies, we have investigated new data streaming paradigms and mechanisms that allow us to perform large-scale distributed data streaming on tens of thousands of high-speed links and nodes, and to aggregate, compress, and interpret the streaming results for better measurement and monitoring of large networks. We have also discovered that the applications of data streaming go far beyond network measurement and monitoring. Recently, we successfully applied data streaming techniques to a seemingly unrelated area: query routing in unstructured P2P networks. In this talk, I will discuss these new applications as well as our theory of formulating data streaming problems in CS as channel design problems in EE.

Bio: Jun (Jim) Xu is an Associate Professor in the College of Computing at the Georgia Institute of Technology. He received his Ph.D. in Computer and Information Science from The Ohio State University in 2000. His current research interests include data streaming algorithms for the measurement and monitoring of computer networks, algorithms and data structures for computer networks, network security, and performance modeling and simulation. He received the NSF CAREER Award in 2003 for his ongoing efforts to establish fundamental lower-bound and tradeoff results in networking. He is a co-author of a paper that won the Best Student Paper Award at the 2004 ACM SIGMETRICS/IFIP Performance joint conference, and was the faculty advisor of the student winners. He received a 2006 IBM Faculty Award for making fundamental contributions to performance evaluation methodologies.
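To make the one-pass, small-memory idea concrete, here is a sketch of the count-min sketch of Cormode and Muthukrishnan, a classic data streaming structure for estimating per-item frequencies. It is offered as an example of the paradigm the abstract describes, not as the speaker's own algorithms.

```python
import random

class CountMinSketch:
    """Approximate per-item counts over a stream, in O(depth * width) memory."""

    def __init__(self, width=1024, depth=4, seed=0):
        rng = random.Random(seed)
        self.width = width
        # One salt per row gives `depth` loosely independent hash functions.
        self.salts = [rng.getrandbits(32) for _ in range(depth)]
        self.rows = [[0] * width for _ in range(depth)]

    def _buckets(self, item):
        for salt in self.salts:
            yield hash((salt, item)) % self.width

    def update(self, item, count=1):
        for row, b in zip(self.rows, self._buckets(item)):
            row[b] += count

    def query(self, item):
        # Each row can only overestimate (collisions add counts), so the
        # minimum over rows is the tightest available estimate.
        return min(row[b] for row, b in zip(self.rows, self._buckets(item)))

# Example: estimate how often a source address appears in a packet stream.
cms = CountMinSketch()
for src in ['10.0.0.1', '10.0.0.2', '10.0.0.1'] * 10_000:
    cms.update(src)
print(cms.query('10.0.0.1'))  # close to (and never below) 20000
```

The memory footprint is fixed regardless of stream length, which is precisely the property that makes such structures candidates for very high-speed links.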

Taming Concurrency: A Program Verification Perspective

Date and Time
Wednesday, December 12, 2007 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Shaz Qadeer, from Microsoft Research
Host
David Walker
Concurrency, as a basic primitive for software construction, is more relevant today than ever before, primarily due to the multi-core revolution. General-purpose software applications must find ways to exploit concurrency explicitly in order to take advantage of multiple cores. However, experience has shown that explicitly parallel programs are difficult to get right. To deliver compelling software products in the multi-core era, we must improve our ability to reason about concurrency.

Generalizing well-known sequential reasoning techniques to concurrent programs is fundamentally difficult because of the possibility of interference among concurrently-executing tasks. In this lecture, I will present reduction and context-bounding, two ideas that alleviate the difficulty of reasoning about interference by creating a simpler view of a concurrent execution than an interleaving of thread instructions. Reduction reduces interference by making a sequence of instructions in a thread appear as a single atomic instruction; it allows the programmer to view an execution as a sequence of large atomic instructions. Context-bounding reduces interference by focusing on executions with a small number of context switches; it allows the programmer to view an execution as a sequence of large thread contexts. I will present the theory behind these two approaches and their realization in a number of program verification tools I have developed over the years.
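A quick way to see why context-bounding tames the search space is to count schedules. The sketch below (an illustration of the idea, not one of the speaker's tools) enumerates interleavings of two threads' instruction sequences while capping the number of context switches:

```python
def schedules(remaining, bound, current=None, switches=0):
    """Yield interleavings (as sequences of thread ids) of threads with the
    given remaining instruction counts, using at most `bound` context switches."""
    if all(r == 0 for r in remaining):
        yield []
        return
    for tid, left in enumerate(remaining):
        if left == 0:
            continue
        cost = switches + (current is not None and tid != current)
        if cost > bound:
            continue
        rest = list(remaining)
        rest[tid] -= 1
        for tail in schedules(rest, bound, tid, cost):
            yield [tid] + tail

# Two threads of 5 instructions each: 252 interleavings in all, but only
# 2, 10, and 42 with at most 1, 2, and 3 context switches respectively.
for k in (1, 2, 3, 9):
    print(k, sum(1 for _ in schedules([5, 5], k)))
```

For a fixed bound the number of schedules grows only polynomially in the program length, which is what makes systematic exploration feasible.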

Bio: Shaz Qadeer is a researcher in the Software Reliability Research group at Microsoft Research. The goal of his research is to develop tools for improving the productivity of programmers. He has worked on many program verification tools, spanning the range from run-time verification to model checking to static analysis, with an emphasis on tools for concurrent programs. Shaz received his Ph.D. at UC Berkeley and worked at Compaq Systems Research Center before joining Microsoft Research.

Understanding the Networked Audience

Date and Time
Thursday, November 29, 2007 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Nancy Baym, from the University of Kansas
Host
Edward Felten
Once viewed as passive consumers, media fans today are not just active, they are organized. Fans use the internet to form communities and networks, to produce their own artistic materials, to publicize what catches their fancy, to form personal alliances and friendships, to petition producers, and even to raise money for charities. They influence media producers in unprecedented ways, challenging old hierarchies. In a world of MySpace and YouTube, engaging fans is no longer a matter of mass communication, but of interpersonal relationship formation. This talk explores these changes and their implications for fans, producers, artists, analysts, and policy-makers.

Bio: Nancy Baym is an Associate Professor of Communication Studies at the University of Kansas. She has written many widely cited articles about online fan community and social aspects of online interaction and is the author of the book Tune In, Log On: Soaps, Fandom, and Online Community (Sage Publications). Her work has been translated into several languages. She is a co-founder and Past President of the Association of Internet Researchers, an international interdisciplinary association. She is an award-winning teacher whose courses address the use of new communication technologies in creating identities, relationships and communities, interpersonal communication, and qualitative research methods. She serves on the editorial boards of the premier journals in the field, including New Media & Society, The Journal of Communication, The Journal of Computer-Mediated Communication, and The Information Society. Her blog about fan activity on the internet can be found at http://www.onlinefandom.com

Taming Wild Concurrent Programs

Date and Time
Wednesday, November 7, 2007 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Yuanyuan Zhou, from the University of Illinois at Urbana-Champaign
Host
Kai Li
The reality of multi-core hardware has made concurrent programs pervasive. Unfortunately, writing correct concurrent programs is difficult. Because programmers are used to sequential thinking, concurrent programs can easily run wild. Additionally, due to their non-deterministic nature, concurrency bugs are notoriously hard to expose during testing and to reproduce for diagnosis.
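To see the non-determinism at work, here is a tiny illustrative example (mine, not one from the talk) of an atomicity violation: two threads perform an unprotected read-modify-write, so on some schedules updates are lost and on others they are not.

```python
import threading

counter = 0

def worker(iterations=100_000):
    global counter
    for _ in range(iterations):
        tmp = counter   # read
        tmp += 1        # modify
        counter = tmp   # write: a concurrent update can be lost here

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000, but runs may print less, and by different amounts each
# time -- exactly the schedule-dependent behavior that makes such bugs
# hard to expose in testing and hard to reproduce for diagnosis.
print(counter)
```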

In this talk, I will describe our recent research efforts in combating wild concurrent programs. More specifically, I will talk about our findings on (1) hardware and system support, as well as data mining and machine learning/NLP techniques, for automatically inferring programmers' synchronization intent and detecting concurrency bugs; (2) systematic approaches for identifying and covering corner-case interleavings to expose more concurrency bugs during in-house testing; and (3) real-world concurrency bug characteristics that can shed some light on future research on concurrency bug detection, testing, and programming language design.

Bio: Yuanyuan Zhou is an associate professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. Prior to UIUC, she worked at NEC Research Institute as a scientist after completing her Ph.D. at Princeton in 2000. Her research interests span the areas of operating systems, architecture, storage systems, and software reliability. She is the recipient of an Alfred P. Sloan Fellowship (2007), the UIUC Gear Faculty Award (2006), an NSF CAREER Award (2004), the CRA-W Anita Borg Early Career Award (2005), the DOE Early Career Principal Investigator Award (2005), IBM Faculty Awards (2004 and 2005), and an IBM SUR Award (2003). Three of her papers have been selected for the IEEE Micro Special Issue on Top Picks from Architecture Conferences, and one won a Best Paper Award at SOSP 2005. She was also named to the "Incomplete List of Teachers Ranked as Excellent by Their Students" at UIUC in 2003 and 2006.

Grand Challenges in Complexity Theory

Date and Time
Thursday, October 25, 2007 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Alexander Razborov, from IAS
Host
Bernard Chazelle
Modern complexity theory is a vast, loosely defined area. Its methods and methodology can be successfully applied whenever one has a set of tasks, a class of admissible solutions, and numerical characteristics to measure the "quality" of a solution. This talk will focus on two specific scenarios: classical computational complexity and proof complexity.

The story I will tell, however, is highly representative of the methods that have been tried in the field at large, with its mixed bag of successes, setbacks, and promising directions.

I will try to reveal some of the tight, beautiful and unexpected connections existing between different branches of complexity theory. I will discuss the "grand challenges" of the field, including "P vs NP" and questions about the power of classical proof systems.

This talk will assume no prior knowledge in theoretical computer science.

Three Beautiful Quicksorts

Date and Time
Wednesday, October 17, 2007 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Jon Bentley, from Avaya Labs Research
Host
Brian Kernighan
This talk describes three of the most beautiful pieces of code that I have ever written: three different implementations of Hoare's classic Quicksort algorithm. The first implementation is a bare-bones function in about a dozen lines of C. The second implementation starts by instrumenting the first program to measure its run time; a dozen systematic code transformations proceed to make it more and more powerful yet more and more simple, until it finally disappears in a puff of mathematical smoke. It therefore becomes the most beautiful program I never wrote. The third program is an industrial-strength C library qsort function that I built with Doug McIlroy. A theme running through all three implementations is the power of elegance and simplicity. (This talk expands on my Chapter 3 in Beautiful Code, edited by Oram and Wilson and published by O'Reilly in July 2007.)
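As a reference point for the first implementation, here is the bare-bones scheme transcribed into Python (the talk's version is roughly a dozen lines of C; this Lomuto-partition rendering is mine, not the speaker's code):

```python
def quicksort(a, lo=0, hi=None):
    """Sort list a in place between indices lo and hi, inclusive."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[lo]
    m = lo                       # a[lo+1..m] holds elements < pivot
    for i in range(lo + 1, hi + 1):
        if a[i] < pivot:
            m += 1
            a[i], a[m] = a[m], a[i]
    a[lo], a[m] = a[m], a[lo]    # put the pivot in its final position
    quicksort(a, lo, m - 1)
    quicksort(a, m + 1, hi)

data = [55, 41, 59, 26, 53, 58, 97, 93]
quicksort(data)
print(data)  # [26, 41, 53, 55, 58, 59, 93, 97]
```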

Probabilistic Graphical Models and Algorithms for Integrative Bioinformatics

Date and Time
Wednesday, October 10, 2007 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Eric Xing, from CMU
Host
Olga Troyanskaya
Probabilistic graphical models are a formalism that exploits the conjoined talents of graph theory and probability theory to build complex models out of simpler pieces. The formalism offers a powerful language for elegantly defining expressive distributions under complex scenarios in high-dimensional spaces, and provides a systematic computational framework for probabilistic inference. These virtues have particular relevance in bioinformatics, where many core inferential problems (e.g., linkage analysis, phylogenetic analysis, network reconstruction) are already naturally expressed in probabilistic terms, and must deal with experimental data with complex structure and temporal and/or spatial dynamics.
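To ground the formalism, here is a toy inference-by-enumeration example on the textbook rain/sprinkler/wet-grass network (a standard illustration chosen for exposition; the talk's models are far richer):

```python
from itertools import product

# Conditional probability tables: Rain -> Sprinkler, {Rain, Sprinkler} -> Wet.
def p_rain(r):      return 0.2 if r else 0.8
def p_sprink(s, r): return (0.01 if s else 0.99) if r else (0.4 if s else 0.6)
def p_wet(w, s, r):
    p = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.8, (False, False): 0.0}[(s, r)]
    return p if w else 1.0 - p

def joint(r, s, w):
    # The graph structure factorizes the joint into simple local pieces.
    return p_rain(r) * p_sprink(s, r) * p_wet(w, s, r)

# Query: P(Rain | GrassWet = True), by summing out the sprinkler.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(f"P(rain | wet grass) = {num / den:.3f}")  # about 0.358
```

Real biological networks replace this brute-force enumeration with the systematic inference machinery the talk describes, but the factorize-then-infer pattern is the same.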

I will discuss our recent work on graphical model inferential methodology in three areas in bioinformatics: (1) Population structure and recombination hotspot inference, using a novel approach based on Dirichlet process priors. I will present a hidden Markov version of the Dirichlet process which allows us to infer recombination events among haplotypes in an "open" ancestral space. (2) Comparative genomics prediction of imperfectly conserved transcription factor binding sites, where multi-resolution phylogenetic inference combines with Markovian inference to provide sensitive detection of motifs and their evolutionary turnovers in eleven Drosophila species. (3) Reverse-engineering of temporally rewiring networks from gene expression time courses, where a novel hidden temporal exponential random graph model is employed to model the temporal evolution of network topologies during a biological process, and to facilitate the inference of transient (rather than a single universal) regulatory circuitry underlying each time-point of the microarray time series.

Bio: Eric Xing is an assistant professor in the Machine Learning Department, the Language Technology Institute, and the Computer Science Department within the School of Computer Science at Carnegie Mellon University. His principal research interests lie in the development of machine learning and statistical methodology, especially for building quantitative models and predictive understandings of the evolutionary mechanisms, regulatory circuitry, and developmental processes of biological systems, and for building computational intelligence systems involving automated learning, reasoning, and decision-making in open, evolving possible worlds. Professor Xing received his B.S. in Physics from Tsinghua University, his first Ph.D. in Molecular Biology and Biochemistry from Rutgers University, and his second Ph.D. in Computer Science from UC Berkeley. He has been a member of the faculty at Carnegie Mellon University since 2004, and his current work involves (1) graphical models, Bayesian methodologies, inference algorithms, and optimization techniques for analyzing and mining high-dimensional, longitudinal, and relational data; (2) computational and comparative genomic analysis of biological sequences, systems biology investigation of gene regulation, and statistical analysis of genetic variation, demography, and disease linkage; and (3) applications of statistical learning in text/image mining, vision, and machine translation.

The Future of Reputation: Gossip, Rumor, and Privacy on the Internet

Date and Time
Monday, October 8, 2007 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Daniel Solove, from George Washington University
Host
Edward Felten