CS Department Colloquium Series

Systems and Algorithms for Computational Photography

Date and Time
Monday, February 28, 2011 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Szymon Rusinkiewicz
Digital cameras have become one of the most ubiquitous forms of computer in existence, but we still don't have a good handle on how to program them, what data we should program them to capture, how that data should be combined to make the photograph, and whether a photograph is even the desired output. I'll talk about two of the tools we've developed to explore these questions.

The first is the "Frankencamera", which is an open API and architecture for programmable cameras that makes it easy to deterministically control the various components of a camera at high frame rates. The goal of the Frankencamera is to enable research in photography, and indeed in graduate-level classes we've found that students equipped with this API are much more able to prototype their novel camera applications than with conventional camera APIs.

The second tool is the discrete Gauss transform, which represents a class of algorithms that are widely used in photographic image processing for denoising, sharpening, and controlling tone and contrast. Incarnations of the discrete Gauss transform include the bilateral filter, the joint bilateral filter, and non-local means. We'll talk about making discrete Gauss transforms fast by representing the problem as resampling in a higher-dimensional space. Finally, time permitting, I'll talk about what we need to do to get these kinds of photographic image processing algorithms to run efficiently on the agglomeration of processors found inside today's cameras.
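
As a concrete example of one such incarnation, here is a minimal brute-force bilateral filter on a 1-D signal in Python (NumPy assumed): each output sample is a Gaussian-weighted average of nearby samples, where the weight also falls off with difference in value, which is what makes the filter edge-preserving. The fast higher-dimensional resampling approach mentioned above replaces this per-sample loop with splat/blur/slice operations on a sparse lattice:

    import numpy as np

    def bilateral_filter_1d(signal, sigma_s=3.0, sigma_r=0.1, radius=9):
        """Brute-force bilateral filter on a 1-D signal.
        sigma_s: spatial falloff; sigma_r: range (value) falloff."""
        out = np.empty_like(signal)
        for i in range(len(signal)):
            lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
            window = signal[lo:hi]
            positions = np.arange(lo, hi)
            # Spatial weight: nearby samples count more.
            w_s = np.exp(-((positions - i) ** 2) / (2 * sigma_s ** 2))
            # Range weight: samples with similar values count more, so
            # averaging does not blur across edges.
            w_r = np.exp(-((window - signal[i]) ** 2) / (2 * sigma_r ** 2))
            w = w_s * w_r
            out[i] = np.sum(w * window) / np.sum(w)
        return out

    # Example: denoise a noisy step edge without smearing the edge.
    x = np.concatenate([np.zeros(50), np.ones(50)])
    x += 0.05 * np.random.randn(100)
    y = bilateral_filter_1d(x)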

Andrew Adams is a PhD student working with Marc Levoy at Stanford University. His research interests include computational cameras, fast image processing, and, more generally, languages and APIs that simplify complex domains. Andrew received a BS in Computer Science and Mathematics from the University of New South Wales in Australia in 2004, and an MS in Computer Science from Stanford in 2006. He has also worked on the Google Street View project, and collaborates closely with Nokia Research on computational photography. Andrew also enjoys teaching, and has won several teaching awards.

Challenges and Opportunities for Tomorrow's Networks

Date and Time
Wednesday, February 16, 2011 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Speaker
Nick Feamster, from Georgia Tech
Host
Michael Freedman
The Internet is the underlying fabric for nearly all of our day-to-day communication; it is an undeniable success. More than a billion people around the world now have Internet access, and that number is projected to at least double in the next 10 years. The Internet is seeing increasing penetration in various resource-challenged environments, both in this country and abroad. These new frontiers for networking offer exciting opportunities, but they also pose serious challenges. Internet attacks are proliferating. Networks are becoming increasingly complex and difficult to troubleshoot. More than 20 countries around the world censor at least some Internet content. Our personal and private data is falling into the hands of a few large content providers. The performance of networks in homes, and how humans interact with them, are both poorly understood. These coming challenges will require us to think differently about networking: a discipline that has focused almost exclusively on performance must begin thinking more about people.

In this talk, I will take a retrospective view of some important networking innovations over the past 10 years, surveying some solutions we and others have developed to improve the Internet's security, manageability and availability. I will then take a look forward at new challenges, and argue that to solve the most pressing problems for networking today, we must draw on techniques from a broad range of areas, including policy, economics, programming languages and human-computer interaction.

Nick Feamster is an assistant professor in the College of Computing at Georgia Tech. He received his Ph.D. in Computer Science from MIT in 2005, and his S.B. and M.Eng. degrees in Electrical Engineering and Computer Science from MIT in 2000 and 2001, respectively. His research focuses on many aspects of computer networking and networked systems, including the design, measurement, and analysis of network routing protocols, network operations and security, and anonymous communication systems. In December 2008, he received the Presidential Early Career Award for Scientists and Engineers (PECASE) for his contributions to cybersecurity, notably spam filtering. His honors also include a Sloan Research Fellowship, the NSF CAREER award, Technology Review's TR35 award, the IBM Faculty Fellowship, and award papers at SIGCOMM 2006, NSDI 2005, USENIX Security 2002, and USENIX Security 2001.

Search and the Social Web: Organizing the World's People and Making them Accessible and Useful

Date and Time
Wednesday, March 9, 2011 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
In the past few years, we have seen a tremendous growth in public human communication and self-expression, through blogs, microblogs, and social networks. In addition, we are beginning to see the emergence of a social technology stack on the web, where profile and relationship information gathered by some applications can be used by other applications. This technology shift, and the cultural shift that has accompanied it, offers a great opportunity for computer scientists, artists, and sociologists to study (and organize) people at scale.

In this talk I will discuss how the changing web suggests new paradigms for search and discovery. I will discuss some recent projects that use web search to study human nature, and human nature to improve web search. I will describe the underlying principles behind these projects and suggest how they might inform future work in search, data mining, and social computing.

Sep Kamvar is a consulting professor of Computational and Mathematical Engineering at Stanford University. His research focuses on social computing and information retrieval. From 2003 to 2007, Sep was the head of personalization at Google. Prior to Google, he was founder and CEO of Kaltix, a personalized search company that was acquired by Google in 2003. Sep is the author of two books and over 40 technical publications and patents in the fields of search and social computing. His artwork is in the permanent collections of the Museum of Modern Art in New York and the Museum of Fine Arts in Houston, and has been exhibited in a number of other museums, including the Victoria and Albert Museum in London and the National Museum of Contemporary Art in Athens. He holds a Ph.D. in Scientific Computing and Computational Mathematics from Stanford University, and an A.B. in Chemistry from Princeton University.

Electrical Flows and Laplacian Systems: A New Tool for Graph Algorithms

Date and Time
Wednesday, February 23, 2011 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Moses Charikar
In recent years, the emergence of massive computing tasks arising in the context of web applications and networks has made the need for efficient graph algorithms more pressing than ever. In particular, it has led us to focus on reducing the running time of our algorithms to make them as fast as possible, even if this comes at the cost of reducing the quality of the returned solution. This motivates us to expand our algorithmic toolkit to include techniques capable of addressing this new challenge.

In this talk, I will describe how treating a graph as a network of resistors and relating the combinatorial properties of the graph to the electrical properties of the resulting circuit provides us with a powerful new set of tools for the above pursuit. As an illustration of their applicability, I will use these ideas to develop a new technique for approximating the maximum flow in capacitated, undirected graphs that yields the asymptotically fastest-known algorithm for this problem.
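
To make the graph-as-resistor-network idea concrete, here is a small NumPy sketch (an illustration of the basic primitive, not the speaker's full algorithm): treat each edge as a unit resistor, solve the Laplacian system L p = chi_s - chi_t for the vertex potentials, and read off the electrical flow on each edge as the potential drop across it. The approximate-max-flow algorithm repeatedly computes such flows with reweighted resistances:

    import numpy as np

    def electrical_flow(n, edges, s, t):
        """Unit-resistance electrical flow of one unit from s to t.
        edges: list of (u, v) pairs on vertices 0..n-1."""
        # Build the graph Laplacian L = D - A.
        L = np.zeros((n, n))
        for u, v in edges:
            L[u, u] += 1; L[v, v] += 1
            L[u, v] -= 1; L[v, u] -= 1
        # One unit of current enters at s and leaves at t.
        b = np.zeros(n)
        b[s], b[t] = 1.0, -1.0
        # L is singular (constant vectors lie in its kernel), so use the
        # pseudoinverse; nearly-linear-time Laplacian solvers replace
        # this dense step in practice.
        p = np.linalg.pinv(L) @ b
        # Ohm's law: flow on (u, v) equals the potential drop across it.
        return {(u, v): p[u] - p[v] for u, v in edges}

    # Example: two parallel paths from 0 to 3 split the current evenly.
    flow = electrical_flow(4, [(0, 1), (1, 3), (0, 2), (2, 3)], 0, 3)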

Aleksander Mądry is a PhD candidate in Computer Science at MIT, advised by Michel Goemans and Jonathan Kelner. His research focuses on algorithmic graph theory, i.e., the design and analysis of very efficient (approximation) algorithms for fundamental graph problems. He also enjoys investigating topics in combinatorial optimization, especially those that involve dealing with uncertainty.

Average-case solutions to hard problems

Date and Time
Tuesday, February 15, 2011 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Sanjeev Arora
Many important computational problems are affected by various computational barriers that make them intractable in the worst case. In spite of this, it is often possible to get around these barriers and produce useful, if not optimal, solutions for most instances of the problem. I will use several examples from my work to illustrate how considering the average or typical case of a problem can lead to useful algorithms and mechanisms.

The first example deals with the problem of assigning medical graduates to residency programs in the presence of married couples. The problem of finding a stable matching between residency programs and doctors without couples is solved by the famous Gale-Shapley algorithm. In the presence of couples, such a matching may not exist, and it is NP-hard to determine whether one exists. Still, in a large random market we show how to find a good stable outcome with high probability.
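
For reference, here is a minimal Python sketch of the couples-free case that Gale-Shapley solves (doctor-proposing, one position per program, complete preference lists assumed; the names are illustrative). The couples variant discussed in the talk modifies this proposal process, and that is where the hardness appears:

    def gale_shapley(doctor_prefs, program_prefs):
        """Stable matching, doctor-proposing.
        doctor_prefs[d]  = list of programs, most preferred first.
        program_prefs[p] = list of doctors, most preferred first."""
        rank = {p: {d: i for i, d in enumerate(ds)}
                for p, ds in program_prefs.items()}
        next_choice = {d: 0 for d in doctor_prefs}  # next program to try
        match = {}                                  # program -> doctor
        free = list(doctor_prefs)
        while free:
            d = free.pop()
            p = doctor_prefs[d][next_choice[d]]
            next_choice[d] += 1
            if p not in match:
                match[p] = d
            elif rank[p][d] < rank[p][match[p]]:
                free.append(match[p])               # p prefers d; bump holder
                match[p] = d
            else:
                free.append(d)                      # rejected; try next choice
        return match

    # Example with two doctors and two programs.
    print(gale_shapley({"d1": ["p1", "p2"], "d2": ["p1", "p2"]},
                       {"p1": ["d2", "d1"], "p2": ["d1", "d2"]}))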

The second example deals with aggregating noisy comparison signals, such as sports competition results, into a ranking most consistent with the observed signals. The worst-case version of the problem, known as the Minimum Feedback Arc Set problem, is NP-hard. We give an efficient algorithm for the average-case version of the problem.
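
A minimal illustration of the objective (a simple Copeland-style heuristic for exposition, not the speaker's algorithm): order items by win count, then count the comparisons the ranking violates. Minimizing that count over all orderings is exactly the minimum feedback arc set problem on the comparison graph:

    from collections import Counter

    def rank_by_wins(comparisons):
        """comparisons: list of (winner, loser) pairs.
        Returns a ranking (best first) ordered by win count -- a simple
        heuristic that does well when comparisons are mostly consistent."""
        wins = Counter(w for w, _ in comparisons)
        items = {x for pair in comparisons for x in pair}
        return sorted(items, key=lambda x: -wins[x])

    def violations(ranking, comparisons):
        """Number of comparisons the ranking disagrees with (back edges)."""
        pos = {x: i for i, x in enumerate(ranking)}
        return sum(1 for w, l in comparisons if pos[w] > pos[l])

    # Example: one upset (c beats a) among otherwise consistent results.
    games = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "a")]
    r = rank_by_wins(games)
    print(r, violations(r, games))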

Bootstrapping Accountability in the Internet We Have

Date and Time
Wednesday, December 8, 2010 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Michael Freedman
The lack of accountability makes the Internet vulnerable to numerous attacks, including prefix hijacking, route forgery, source address spoofing, and DoS flooding attacks. This work takes a "dirty-slate" approach to bring accountability to the present Internet with low-cost, deployable enhancements. In this talk, we will present IPA, a network architecture that uses the readily available top-level DNSSEC infrastructure and BGP to bootstrap accountability. We integrate IPA with a suite of security building blocks to combat various network-layer attacks. Our evaluation shows that IPA introduces modest overhead, is incrementally deployable, and offers security incentives to early adopters.
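
The abstract does not spell out IPA's mechanics, but the general pattern of bootstrapping trust from a signed hierarchy like DNSSEC can be sketched as follows (purely illustrative Python, with HMAC standing in for real public-key signatures; the record layout is an assumption, not IPA's actual design). Each zone vouches for its child's key, so a verifier that trusts only the root key can validate a leaf binding by walking the chain:

    import hmac, hashlib

    def sign(key, msg):
        # Stand-in for a real public-key signature (e.g., a DNSSEC RRSIG).
        return hmac.new(key, msg, hashlib.sha256).digest()

    def verify_chain(root_key, chain):
        """chain: list of (name, child_key, signature) records, root-first.
        Each signature must have been produced with the parent's key over
        (name + child_key). Returns True iff every link checks out."""
        key = root_key
        for name, child_key, sig in chain:
            expected = sign(key, name + child_key)
            if not hmac.compare_digest(expected, sig):
                return False
            key = child_key  # descend: the child's key is now trusted
        return True

    # Example: root vouches for "edu", which vouches for "duke.edu".
    root, edu, duke = b"root-key", b"edu-key", b"duke-key"
    chain = [(b"edu", edu, sign(root, b"edu" + edu)),
             (b"duke.edu", duke, sign(edu, b"duke.edu" + duke))]
    print(verify_chain(root, chain))  # True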

Xiaowei Yang is an assistant professor in the Department of Computer Science at Duke University. Before joining Duke, she was an assistant professor in the Department of Computer Science at the University of California, Irvine. She received a PhD in Computer Science from the Massachusetts Institute of Technology, and a BE from Tsinghua University. She is a recipient of the NSF CAREER award.

What Human-Computer Interaction and Medicine Can Offer Each Other

Date and Time
Monday, December 13, 2010 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Speaker
Dan Morris, from Microsoft Research
Host
Rebecca Fiebrink
Our work sits at the intersection of Human-Computer Interaction (HCI) and Medicine. That means that 50% of what we do is applying HCI methodology to problems in medicine, so we can spend the other 50% of our time "borrowing" technologies from medicine and applying them to HCI. This talk, therefore, will focus on two topics: (a) patient-friendly medical information displays, and (b) sensing physiological signals for computer input. In the first part of the talk, I'll present our work on making electronic medical record data more useful to patients, work that spans both qualitative research ("what *should* patient interfaces look like?") and technology research ("how do we actually build that without asking doctors to spend hours explaining everything to us?"). In the second part of the talk, I'll present our work on using sensing techniques that medical science has known about for centuries (like measuring electrical muscle activity, and measuring body acoustics) to build new computer input systems. This part of the talk will have cool videos, so bring your popcorn.
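
As a flavor of what "physiological signals for computer input" can mean computationally, here is a generic Python sketch (an illustration of the standard signal-processing recipe, not the speakers' actual system): detect muscle activations in an EMG-like signal by rectifying it, smoothing to get an energy envelope, and thresholding:

    import numpy as np

    def detect_activations(signal, fs, window_s=0.05, threshold=0.2):
        """Return sample indices where a rectified, smoothed EMG-like
        signal crosses the threshold from below (activation onsets).
        fs: sampling rate in Hz."""
        n = int(fs * window_s)
        # Rectify, then smooth with a moving average to get an envelope.
        envelope = np.convolve(np.abs(signal), np.ones(n) / n, mode="same")
        above = envelope > threshold
        # Onsets are rising edges of the thresholded envelope.
        onsets = np.where(above[1:] & ~above[:-1])[0] + 1
        return onsets

    # Example: synthetic signal with a burst of "muscle activity" at 1s.
    fs = 1000
    x = 0.02 * np.random.randn(2 * fs)
    x[1000:1200] += 0.5 * np.random.randn(200)  # activation burst
    print(detect_activations(x, fs))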

Dan Morris is a researcher in the Computational User Experiences group at Microsoft Research; he is interested in novel input devices, patient-facing medical technology, and computer support for music and creativity.

Dan studied neurobiology as an undergraduate at Brown, and developed brain-computer interfaces: first at Brown, and later as an engineer at Cyberkinetics, Inc. His PhD thesis at Stanford focused on haptic rendering and physical simulation for virtual surgery, and his work since coming to MSR (in 2006) has included using physiological signals for input systems, designing patient-friendly information displays for hospitals, and generating automatic accompaniment for sung melodies. If you would like to see both his research talents and his (lack of) performance talents showcased in one four-minute masterpiece, search for "Microsoft Songsmith Commercial".

The Challenges (and Joys!) of Personalizing Web Search

Date and Time
Wednesday, December 1, 2010 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Speaker
Sep Kamvar, from Stanford University
Host
Olga Troyanskaya
The personalization of search results has long been seen as an important strategic direction for search engines. However, it is only in the past few years that personalized search has become possible at scale. In this talk, I will present the algorithms that made web-scale personalized search possible. I will discuss some special characteristics of the structure of the web (and the eigenstructure of the web matrix), and describe the computational techniques used to exploit this structure to compute PageRank much faster than standard methods allow. The resulting speed allowed the computation of an individualized ranking for every user on the web, resulting in a dramatic improvement in search relevance.
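
The core computation here is personalized PageRank: the standard PageRank recurrence with the uniform teleport vector replaced by a user-specific one. A minimal NumPy power-iteration sketch follows; the speedups discussed in the talk come from exploiting the web's structure, which is not shown here:

    import numpy as np

    def personalized_pagerank(A, v, alpha=0.85, tol=1e-10):
        """A: column-stochastic link matrix (column j = out-links of j).
        v: personalization (teleport) vector, nonnegative, summing to 1.
        Solves x = alpha * A x + (1 - alpha) * v by power iteration."""
        x = v.copy()
        while True:
            x_new = alpha * (A @ x) + (1 - alpha) * v
            if np.abs(x_new - x).sum() < tol:
                return x_new
            x = x_new

    # Example: a 3-page web; this user cares only about page 0.
    A = np.array([[0.0, 0.5, 1.0],
                  [0.5, 0.0, 0.0],
                  [0.5, 0.5, 0.0]])  # each column sums to 1
    v = np.array([1.0, 0.0, 0.0])
    print(personalized_pagerank(A, v))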

Sep Kamvar is a consulting professor of Computational and Mathematical Engineering at Stanford University. His research focuses on personal and social models for search.

From 2003 to 2007, Sep was the head of personalization at Google. Prior to Google, he was founder and CEO of Kaltix, a personalized search company that was acquired by Google in 2003.

Sep is the author of two books and over 40 technical publications and patents in the fields of search and social computing. He is on the technical advisory boards of several companies, including Aardvark, Etsy, and Hunch. His artwork has been exhibited at the Museum of Modern Art in New York, the Victoria and Albert Museum in London, and the National Museum of Contemporary Art in Athens.

Geometry and Computation - Classical Problems From a CS Perspective

Date and Time
Tuesday, November 9, 2010 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Sanjeev Arora
In this talk I will demonstrate instances where natural CS problems lead to classical problems in mathematics, in particular to basic problems in geometry involving lines and points in vector spaces. In some cases these are well-known open problems; in others, new variants of old theorems. We will see how tools from CS are useful in the study of these problems. In the first part I will explain how constructions of randomness extractors are related to the finite field Kakeya conjecture. In the second part I will show how locally correctable codes lead to generalizations of the Sylvester-Gallai theorem.
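
For reference, here are the two classical statements mentioned above, written in LaTeX (quantitative constants omitted):

    % Finite field Kakeya (conjectured by Wolff, resolved via the
    % polynomial method): Kakeya sets over finite fields are large.
    A set $K \subseteq \mathbb{F}_q^n$ is a \emph{Kakeya set} if it
    contains a line in every direction: for every $b \in \mathbb{F}_q^n$
    there is an $a \in \mathbb{F}_q^n$ with
    $\{\, a + tb : t \in \mathbb{F}_q \,\} \subseteq K$.
    Every Kakeya set satisfies
    \[
      |K| \;\ge\; c_n\, q^n
    \]
    for a constant $c_n > 0$ depending only on the dimension $n$.

    % Sylvester--Gallai: for any finite set of points in the real plane,
    % not all on one line, some line passes through exactly two of them.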

Zeev Dvir is currently a postdoc in the Department of Computer Science at Princeton University. He received his Ph.D. from the Weizmann Institute in Israel in 2008 under the guidance of Ran Raz and Amir Shpilka. His main field of research is computational complexity, especially circuit complexity, de-randomization, and coding theory.

Enabling Large-Scale Data Intensive Computations

Date and Time
Friday, October 22, 2010 - 1:30pm to 3:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Edward Felten
This talk describes a set of distributed services developed at Microsoft Research Silicon Valley to enable efficient parallel programming on very large datasets. Parallel programs arise naturally within scientific, data mining, and business applications. Central to our philosophy is the notion that parallel programs do not have to be difficult to write, and that the same program must seamlessly run on a laptop, a desktop, a small cluster, or a large data center without the author having to worry about the details of parallelization, synchronization, or fault tolerance. We have built several services (Dryad, DryadLINQ, TidyFS, and Nectar) that embody this belief. Our goal is to enable users, particularly scientists of all disciplines, to treat a computer cluster as a forensic, diagnostic, or analytic tool. The talk will describe the details of our infrastructure and the characteristics of some of the applications that have been run on it.
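
To illustrate the programming model (DryadLINQ itself is a .NET/LINQ system; this Python sketch is only an analogy, with invented names): the programmer writes one declarative dataflow of pure stages, and a Dryad-style runtime decides where each stage runs and inserts partitioning and fault tolerance, so the same program can run on a laptop or a cluster:

    def run_local(partitions, map_fn, reduce_fn):
        """Execute a two-stage map/reduce dataflow on local data.
        partitions: list of input chunks (stand-ins for distributed files).
        A cluster runtime would run stage 1 on many machines instead."""
        # Stage 1: apply map_fn to each partition independently.
        mapped = [map_fn(p) for p in partitions]
        # Stage 2: merge per-partition results (the "reduce" vertex).
        out = {}
        for part in mapped:
            for k, v in part.items():
                out[k] = reduce_fn(out.get(k, 0), v)
        return out

    def count_words(chunk):
        counts = {}
        for w in chunk.split():
            counts[w] = counts.get(w, 0) + 1
        return counts

    # Example: two "partitions" of a dataset, counted then merged.
    data = ["the quick brown fox", "the lazy dog the end"]
    print(run_local(data, count_words, lambda a, b: a + b))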

Chandu Thekkath is a researcher at Microsoft Research Silicon Valley. He received his Ph.D. in Computer Science from the University of Washington in 1994. Since then, except for a sabbatical year at Stanford in 2000, he has been in industrial research labs at DEC, Compaq, and Microsoft. He is a fellow of the ACM.
