
CS Department Colloquium Series

Electrical Flows and Laplacian Systems: A New Tool for Graph Algorithms

Date and Time
Wednesday, February 23, 2011 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Moses Charikar
In recent years, the emergence of massive computing tasks arising in the context of web applications and networks has made the need for efficient graph algorithms more pressing than ever. In particular, it has led us to focus on reducing the running time of our algorithms to make them as fast as possible, even if this comes at the cost of a lower-quality solution. This motivates us to expand our algorithmic toolkit to include techniques capable of addressing this new challenge.

In this talk, I will describe how treating a graph as a network of resistors and relating the combinatorial properties of the graph to the electrical properties of the resulting circuit provides us with a powerful new set of tools for the above pursuit. As an illustration of their applicability, I will use these ideas to develop a new technique for approximating the maximum flow in capacitated, undirected graphs that yields the asymptotically fastest-known algorithm for this problem.
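To make the electrical analogy concrete, here is a minimal numerical sketch (an illustration of the underlying primitive, not the new max-flow algorithm itself): treating every edge as a unit resistor, the vertex potentials of the electrical s-t flow are obtained by solving the Laplacian system Lx = b, and the edge flows then follow from Ohm's law.

```python
# Minimal sketch: electrical s-t flow in an undirected graph with unit
# resistances, obtained by solving the Laplacian system L x = b.
import numpy as np

def electrical_flow(n, edges, s, t):
    # Graph Laplacian L = D - A for unit-resistance edges.
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    # Demand vector: inject one unit of current at s, extract it at t.
    b = np.zeros(n)
    b[s], b[t] = 1.0, -1.0
    # L is singular (constants lie in its null space), so use the
    # pseudo-inverse; any solution of L x = b gives valid potentials.
    x = np.linalg.pinv(L) @ b
    # Flow on edge (u, v) is the potential difference (Ohm's law, R = 1).
    return {(u, v): x[u] - x[v] for u, v in edges}

# Example: a 4-cycle, one unit of current from vertex 0 to vertex 2;
# the current splits evenly, 0.5 in magnitude on every edge.
print(electrical_flow(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 0, 2))
```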

Aleksander is a PhD candidate in Computer Science at MIT, advised by Michel Goemans and Jonathan Kelner. His research focuses on algorithmic graph theory, i.e., the design and analysis of very efficient (approximation) algorithms for fundamental graph problems. He also enjoys investigating topics in combinatorial optimization, especially those involving uncertainty.

Average-case solutions to hard problems

Date and Time
Tuesday, February 15, 2011 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Sanjeev Arora
Many important computational problems are affected by computational barriers that make them intractable in the worst case. In spite of this, it is often possible to get around these barriers and produce useful, if not optimal, solutions for most instances of the problem. I will use several examples from my work to illustrate how considering an average or typical case of the problem can be used to produce useful algorithms and mechanisms.

The first example deals with the problem of assigning medical graduates to residency programs in the presence of married couples. The problem of finding a stable matching between residency programs and doctors without couples is solved by the famous Gale-Shapley algorithm. In the presence of couples, such a matching may not exist, and it is NP-hard to determine whether it exists or not. Still, in a large random market we show how to find a good stable outcome with high probability.
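For reference, below is a minimal sketch of the classic, couple-free Gale-Shapley algorithm mentioned above, in its doctor-proposing form with one position per program (the couples variant studied in the talk is substantially more involved); the names and preference lists are made up.

```python
# Sketch of the couple-free Gale-Shapley algorithm (doctor-proposing),
# assuming complete preference lists and one position per program.
def gale_shapley(doctor_prefs, program_prefs):
    # doctor_prefs[d]  : list of programs, most preferred first
    # program_prefs[p] : list of doctors, most preferred first
    rank = {p: {d: i for i, d in enumerate(ds)} for p, ds in program_prefs.items()}
    free = list(doctor_prefs)             # doctors not yet matched
    next_choice = {d: 0 for d in doctor_prefs}
    match = {}                            # program -> doctor
    while free:
        d = free.pop()
        p = doctor_prefs[d][next_choice[d]]   # next program d has not tried
        next_choice[d] += 1
        if p not in match:
            match[p] = d
        elif rank[p][d] < rank[p][match[p]]:  # program p prefers the newcomer
            free.append(match[p])
            match[p] = d
        else:
            free.append(d)                    # rejected; d stays free
    return match

doctors  = {"alice": ["mercy", "city"], "bob": ["city", "mercy"]}
programs = {"mercy": ["bob", "alice"], "city": ["alice", "bob"]}
print(gale_shapley(doctors, programs))   # a stable matching
```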

The second example deals with aggregating noisy comparison signals, such as sports competition results, into a ranking most consistent with the observed signals. The worst-case version of the problem, known as the Minimum Feedback Arc Set problem, is NP-hard. We give an efficient algorithm for the average-case version of the problem.
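As a point of comparison only (this is not the algorithm from the talk), the simplest heuristic for this kind of data is to rank players by their number of wins; under natural average-case models of noisy comparisons this already gives a reasonable starting ranking that more careful algorithms then refine.

```python
# Naive baseline for ranking from pairwise comparisons: sort by win count.
from collections import Counter

def rank_by_wins(comparisons):
    # comparisons: iterable of (winner, loser) pairs
    wins, players = Counter(), set()
    for w, l in comparisons:
        wins[w] += 1
        players.update((w, l))
    return sorted(players, key=lambda p: -wins[p])

games = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("b", "d"), ("c", "d")]
print(rank_by_wins(games))   # -> ['a', 'b', 'c', 'd']
```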

Bootstrapping Accountability in the Internet We Have

Date and Time
Wednesday, December 8, 2010 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Michael Freedman
The lack of accountability makes the Internet vulnerable to numerous attacks, including prefix hijacking, route forgery, source address spoofing, and DoS flooding attacks. This work takes a "dirty-slate" approach to bring accountability to the present Internet with low-cost and deployable enhancements. In this talk, we will present IPA, a network architecture that uses the readily available top-level DNSSEC infrastructure and BGP to bootstrap accountability. We integrate IPA with a suite of security building blocks to combat various network-layer attacks. Our evaluation shows that IPA introduces modest overhead, is gradually deployable, and offers security incentives to early adopters.

Xiaowei Yang is an assistant professor in the Department of Computer Science at Duke University. Before joining Duke, she was an assistant professor in the Department of Computer Science at the University of California, Irvine. She received a PhD in Computer Science from the Massachusetts Institute of Technology and a BE from Tsinghua University. She is a recipient of the NSF CAREER award.

What Human-Computer Interaction and Medicine Can Offer Each Other

Date and Time
Monday, December 13, 2010 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Speaker
Dan Morris, from Microsoft Research
Host
Rebecca Fiebrink
Our work sits at the intersection of Human-Computer Interaction (HCI) and Medicine. That means that 50% of what we do is applying HCI methodology to problems in medicine, so we can spend the other 50% of our time "borrowing" technologies from medicine and applying them to HCI. This talk, therefore, will focus on two topics: (a) patient-friendly medical information displays, and (b) sensing physiological signals for computer input. In the first part of the talk, I'll present our work on making electronic medical record data more useful to patients, work that spans both qualitative research ("what *should* patient interfaces look like?") and technology research ("how do we actually build that without asking doctors to spend hours explaining everything to us?"). In the second part of the talk, I'll present our work on using sensing techniques that medical science has known about for centuries (like measuring electrical muscle activity, and measuring body acoustics) to build new computer input systems. This part of the talk will have cool videos, so bring your popcorn.

Dan Morris is a researcher in the Computational User Experiences group at Microsoft Research; he is interested in novel input devices, patient-facing medical technology, and computer support for music and creativity.

Dan studied neurobiology as an undergraduate at Brown, and developed brain-computer interfaces: first at Brown, and later as an engineer at Cyberkinetics, Inc. His PhD thesis at Stanford focused on haptic rendering and physical simulation for virtual surgery, and his work since coming to MSR (in 2006) has included using physiological signals for input systems, designing patient-friendly information displays for hospitals, and generating automatic accompaniment for sung melodies. If you would like to see both his research talents and his (lack of) performance talents showcased in one four-minute masterpiece, search for "Microsoft Songsmith Commercial".

The Challenges (and Joys!) of Personalizing Web Search

Date and Time
Wednesday, December 1, 2010 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Speaker
Sep Kamvar, from Stanford University
Host
Olga Troyanskaya
The personalization of search results has long been seen as an important strategic direction for search engines. However, it was not until the past few years that personalized search became possible at scale. In this talk, I will describe the algorithms that made web-scale personalized search possible. I will discuss some special characteristics of the structure of the web (and the eigenstructure of the web matrix), and describe the computational techniques used to exploit this structure to compute PageRank much faster than by standard methods. The resulting speed allowed the computation of an individualized ranking for every user on the web, resulting in a dramatic improvement in search relevance.
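For readers who have not seen it, here is a minimal sketch of personalized PageRank computed by plain power iteration, where the teleport vector encodes a user's preferences; the tiny graph, damping factor, and iteration count are illustrative assumptions, and this is the standard method rather than the accelerated techniques discussed in the talk.

```python
# Personalized PageRank via power iteration: r = d * P r + (1 - d) * v,
# where v is a user-specific teleport distribution.
import numpy as np

def personalized_pagerank(adj, teleport, damping=0.85, iters=100):
    n = len(adj)
    # Column-stochastic transition matrix (dangling pages not handled here).
    out_deg = adj.sum(axis=0)
    P = adj / np.where(out_deg == 0, 1, out_deg)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = damping * (P @ r) + (1 - damping) * teleport
    return r

# Tiny 3-page web; adj[i, j] = 1 if page j links to page i.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 1, 0]], dtype=float)
v = np.array([0.0, 0.0, 1.0])   # this user cares only about page 2
print(personalized_pagerank(A, v))
```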

Sep Kamvar is a consulting professor of Computational and Mathematical Engineering at Stanford University. His research focuses on personal and social models for search.

From 2003 to 2007, Sep was the head of personalization at Google. Prior to Google, he was founder and CEO of Kaltix, a personalized search company that was acquired by Google in 2003.

Sep is the author of two books and over 40 technical publications and patents in the fields of search and social computing. He is on the technical advisory boards of several companies, including Aardvark, Etsy, and Hunch. His artwork has been exhibited at the Museum of Modern Art in New York, the Victoria and Albert Museum in London, and the National Museum of Contemporary Art in Athens.

Geometry and Computation - Classical Problems From a CS Perspective

Date and Time
Tuesday, November 9, 2010 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Sanjeev Arora
In this talk I will demonstrate instances where natural CS problems lead to classical problems in mathematics, in particular to basic problems in geometry involving lines and points in vector spaces. In some cases these are well-known open problems, and in others new variants of old theorems. We will see how tools from CS are useful in the study of these problems. In the first part I will explain how constructions of Randomness Extractors are related to the Finite Field Kakeya conjecture. In the second part I will show how Locally Correctable Codes lead to generalizations of the Sylvester-Gallai theorem.
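For context, the finite field Kakeya statement referred to above can be phrased as follows (this is the known form of the resolved conjecture; the constant below is the simplest one and has since been improved):

```latex
% Finite field Kakeya theorem: a set containing a line in every direction
% must occupy a constant fraction of the space.
Let $K \subseteq \mathbb{F}_q^n$ be a Kakeya set, i.e.\ for every direction
$b \in \mathbb{F}_q^n \setminus \{0\}$ there is a point $a \in \mathbb{F}_q^n$
with $\{\, a + t b : t \in \mathbb{F}_q \,\} \subseteq K$. Then
\[
  |K| \;\ge\; c_n\, q^{\,n},
\]
where $c_n > 0$ depends only on $n$ (one may take $c_n = 1/n!$).
```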

Zeev Dvir is currently a postdoc in the Department of Computer Science at Princeton University. He received his Ph.D. from the Weizmann Institute in Israel in 2008 under the guidance of Ran Raz and Amir Shpilka. His main field of research is computational complexity, especially circuit complexity, derandomization, and coding theory.

Enabling Large-Scale Data Intensive Computations

Date and Time
Friday, October 22, 2010 - 1:30pm to 3:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Edward Felten
This talk describes a set of distributed services developed at Microsoft Research Silicon Valley to enable efficient parallel programming on very large datasets. Parallel programs arise naturally within scientific, data mining, and business applications. Central to our philosophy is the notion that parallel programs do not have to be difficult to write and that the same program must run seamlessly on a laptop, a desktop, a small cluster, or a large data center without the author having to worry about the details of parallelization, synchronization, or fault-tolerance. We have built several services (Dryad, DryadLINQ, TidyFS, and Nectar) that embody this belief. Our goal is to enable users, particularly scientists of all disciplines, to treat a computer cluster as a forensic, diagnostic, or analytic tool. The talk will describe the details of our infrastructure and the characteristics of some of the applications that have been run on it.

Chandu Thekkath is a researcher at Microsoft Research Silicon Valley. He received his Ph.D. in Computer Science from the University of Washington in 1994. Since then, except for a sabbatical year at Stanford in 2000, he has been in industrial research labs at DEC, Compaq, and Microsoft. He is a fellow of the ACM.

Identifying Dark Latency

Date and Time
Wednesday, September 22, 2010 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Speaker
Richard L. Sites, from Google, Inc.
In astronomy and cosmology, dark matter is hypothetical matter that is undetectable by its emitted radiation, but whose presence can be inferred from gravitational effects on visible matter. [Wikipedia]

By analogy, dark latency in software is latency that is undetectable directly, but whose presence can be inferred from overall application delays. We bring four tools to bear to observe dark latency in some Google web services, then identify and fix several root causes, in low-level Google libraries and in the TCP stack.

The talk makes a case for building and using very low-overhead tools to do bursty tracing of hundreds of thousands of timestamped events per second. A methodology of time-aligning multiple traces (remote procedure call, network, CPU, and lock-contention) across scores of interacting machines is the minimum needed to understand some sources of latency in real web services.
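As a small illustration of the time-alignment step described above (this is not Google's tooling, and the record format here is made up), independently recorded, timestamp-sorted trace streams can be merged lazily into one time-ordered view:

```python
# Merge several timestamp-sorted trace streams (RPC, network, lock, ...)
# into a single time-ordered stream for side-by-side inspection.
import heapq

def align_traces(*streams):
    # Each stream: iterable of (timestamp, source, event), sorted by timestamp.
    return heapq.merge(*streams, key=lambda rec: rec[0])

rpc  = [(1.002, "rpc",  "request sent"), (1.047, "rpc",  "reply received")]
net  = [(1.005, "net",  "packet out"),   (1.041, "net",  "packet in")]
lock = [(1.020, "lock", "contended for 15 ms")]

for ts, source, event in align_traces(rpc, net, lock):
    print(f"{ts:.3f} [{source:4}] {event}")
```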

Dick Sites is a Senior Staff Engineer at Google, where he has worked for 6 years. He previously worked at Adobe Systems, Digital Equipment Corporation, Hewlett-Packard, Burroughs, and IBM. His accomplishments include co-architecting the DEC Alpha computers, advancing the art of binary translation for computer executables, adding electronic book encryption to Adobe Acrobat, decoding image metadata for Photoshop, and building various computer performance monitoring and tracing tools at the above companies. He also taught Computer Science for four years at UC/San Diego. Most recently he has been working on Unicode text processing and on CPU and network performance analysis at Google. Dr. Sites holds a PhD degree in Computer Science from Stanford and a BS degree in Mathematics from MIT. He also attended the Master's program in Computer Science at University of North Carolina 1969-70. He holds 34 patents and was recently elected to the U.S. National Academy of Engineering.

Some Open Questions Related to Cuckoo Hashing

Date and Time
Wednesday, September 29, 2010 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Jennifer Rexford
In this talk, we will describe cuckoo hashing, a recently developed hashing scheme that offers the benefits of very high memory utilization and worst-case constant lookup times. We then discuss recent work in the area, including theoretical bounds on performance, practical methods for improving performance, and implementations on a GPU. Along the way, we will suggest several remaining open questions.
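For readers unfamiliar with the scheme, below is a minimal sketch of two-table cuckoo hashing with one key per bucket; the hash functions, table size, and eviction limit are simplistic stand-ins, and a production implementation would use multiple slots per bucket and rehash (or keep a small stash) when insertion fails.

```python
# Two-table cuckoo hashing: each key has exactly two possible buckets, so
# lookups touch at most two slots (worst-case constant time).
import random

class CuckooHash:
    def __init__(self, size=16, max_kicks=32):
        self.size, self.max_kicks = size, max_kicks
        self.tables = [[None] * size, [None] * size]
        self.seeds = [random.randrange(1 << 30), random.randrange(1 << 30)]

    def _slot(self, key, i):
        return hash((self.seeds[i], key)) % self.size

    def lookup(self, key):
        return any(self.tables[i][self._slot(key, i)] == key for i in (0, 1))

    def insert(self, key):
        if self.lookup(key):
            return True
        for _ in range(self.max_kicks):
            for i in (0, 1):
                s = self._slot(key, i)
                if self.tables[i][s] is None:
                    self.tables[i][s] = key
                    return True
                # Evict the current occupant and try to place it next.
                self.tables[i][s], key = key, self.tables[i][s]
        return False   # too many evictions; a real table would rehash

h = CuckooHash()
for k in ["a", "b", "c", "d"]:
    h.insert(k)
print(h.lookup("c"), h.lookup("z"))   # True False
```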

The talk will require no prior background on cuckoo hashing, and is aimed at a wide audience, including both theory and systems researchers.

Michael Mitzenmacher is a Professor of Computer Science in the School of Engineering and Applied Sciences at Harvard University. Michael has authored or co-authored over 150 conference and journal publications on a variety of topics, including Internet algorithms, hashing, load-balancing, erasure codes, error-correcting codes, compression, bin-packing, and power laws. His work on low-density parity-check codes received the 2002 IEEE Information Theory Society Best Paper Award and the 2009 SIGCOMM Test of Time Award. His textbook on probabilistic techniques in computer science, co-written with Eli Upfal, was published in 2005 by Cambridge University Press.

Exploring Emotion and Expression through Music Technology

Date and Time
Tuesday, April 27, 2010 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Speaker
Youngmoo Kim, from Drexel
Host
Kenneth Steiglitz
Over the past few decades, advances in computing and signal processing have had a profound impact on music performance, recording, and reproduction. The universal appeal of music, however, lies in its ability to provide expression to our feelings, and the recognition or utilization of these concepts computationally remains beyond the capability of current systems. Expression and emotion are, of course, highly subjective terms, ill-suited to quantitative definition and exploration. There may be considerable disagreement among listeners regarding the emotional content of a particular piece, and the expressive intentions of a highly skilled performer can be equally difficult to quantify. But innovations in sensing, human-computer interaction, and machine learning have the potential to reveal insights into these opaque domains.

This presentation will highlight research by the Music & Entertainment Technology Laboratory (MET-lab) at Drexel University exploring music, emotion, and creative expression under the common vision of making music more interactive and accessible for both musicians and non-musicians. These projects encompass the recognition of emotion, such as a system for dynamic musical mood prediction and a collaborative web game for the collection of emotional annotations, as well as interfaces for expressive performance, including a novel electromagnetic approach to shaping the sound of the acoustic piano and a user-friendly controller for remixing music in terms of emotion. These and other MET-lab efforts are closely coupled with educational initiatives, many of which have been deployed in K-12 outreach programs in the Philadelphia region, to promote learning in Science, Technology, Engineering, and Mathematics (STEM).

Youngmoo Kim is an Assistant Professor of Electrical and Computer Engineering at Drexel University. His research group, the Music & Entertainment Technology Laboratory (MET-lab), focuses on the machine understanding of audio, particularly for music information retrieval. Other areas of active research at MET-lab include analysis-synthesis of sound, human-machine interfaces and robotics for expressive interaction, and K-12 outreach for engineering, science, and mathematics education. Youngmoo received his Ph.D. from the MIT Media Lab in 2003 and also holds Master's degrees in Electrical Engineering and Music (Vocal Performance Practice) from Stanford University. He served as a member of the MPEG standards committee, contributing to the MPEG-4 and MPEG-7 audio standards, and he co-chaired the 2008 International Conference on Music Information Retrieval (hosted at Drexel). His research is supported by the National Science Foundation and the NAMM Foundation, including an NSF CAREER award in 2007.
