

How to Design Simple Efficient Mechanisms that are also Composable

Date and Time
Monday, November 19, 2012 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Sanjeev Arora, CS and PACM
E-commerce applications require mechanisms that are simple and well designed, and that work well even when users participate in multiple mechanisms and each player's overall value is a complex function of their outcomes. Traditional mechanism design has considered mechanisms only in isolation, and the mechanisms it proposes tend to be complex and impractical. In practice, players typically participate in many mechanisms run by different principals (e.g., different sellers on eBay or different ad-exchange platforms), and coordinating them to run a single combined mechanism is infeasible or impractical. Even the simple and elegant Vickrey auction loses some of its appeal when it is not run in isolation: when the overall value of each player is a complex function of their outcomes, the global mechanism is no longer truthful.
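As a concrete point of reference, here is a minimal sketch of a single-item Vickrey auction (my illustration; the function and the values are made up, not from the talk). Run in isolation, the winner pays the second-highest bid, so bidding one's true value is a dominant strategy; the abstract's point is that this guarantee evaporates once the auction is composed with others.

    # Minimal single-item Vickrey (second-price) auction -- illustrative only.
    def vickrey_auction(bids):
        """bids: dict of bidder -> bid; returns (winner, price paid)."""
        ranked = sorted(bids, key=bids.get, reverse=True)
        winner = ranked[0]
        price = bids[ranked[1]] if len(ranked) > 1 else 0.0
        return winner, price

    # Truthful in isolation: the winner's payment does not depend on her own bid.
    winner, price = vickrey_auction({"alice": 10.0, "bob": 7.0, "carol": 4.0})
    assert (winner, price) == ("alice", 7.0)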

In this talk we initiate the study of efficient mechanism design with guaranteed good properties even when players participate in multiple different mechanisms, either simultaneously or sequentially. Over the last decade, computer scientists and game theorists have developed a good understanding of the impact of strategic user behavior on system performance in environments that include selfish traffic routing, service location, and bandwidth sharing. We will consider simple mechanisms from this perspective. We define smooth mechanisms (which can be thought of as mechanisms that generate approximately market-clearing prices) and show that smooth mechanisms are approximately efficient, and that the efficiency guarantees proved for a smooth mechanism in isolation carry over, exactly or approximately, to a market composed of such mechanisms. Joint work with Vasilis Syrgkanis.
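For orientation, one standard way to state the smoothness condition is sketched below; the notation is assumed for illustration, not quoted from the talk. A mechanism is (λ, μ)-smooth if, for every valuation profile v and every bid profile b, each player i has a deviation b*_i such that:

    % Sketch of the smoothness condition (notation assumed, not from the talk).
    \[
      \sum_i u_i\bigl(b_i^{*},\, b_{-i};\, v_i\bigr)
        \;\ge\; \lambda \cdot \mathrm{OPT}(v) \;-\; \mu \sum_i P_i(b)
    \]
    % OPT(v) is the optimal welfare; P_i(b) is player i's payment under bids b.

A (λ, μ)-smooth mechanism then guarantees roughly a λ/max(1, μ) fraction of the optimal welfare at equilibrium, and it is this form of guarantee that survives simultaneous and sequential composition.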

Eva Tardos is a Jacob Gould Schurman Professor of Computer Science at Cornell University, where she is Senior Associate Dean of Computing and Information Science and was department chair from 2006 to 2010. She received her PhD from Eotvos University in Budapest. She has been elected to the National Academy of Engineering and the American Academy of Arts and Sciences, and is the recipient of numerous fellowships and awards, including the Packard Fellowship, the Fulkerson Prize, and the Dantzig Prize. She was editor-in-chief of the SIAM Journal on Computing from 2003 to 2009, and is an editor of several other journals, including the Journal of the ACM and Combinatorica. Her research interest is algorithms and algorithmic game theory, the subarea of theory concerned with designing systems and algorithms for selfish users.

What are the Right Roles for Formal Methods in High Assurance Cloud Computing?

Date and Time
Tuesday, November 13, 2012 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Michael Freedman
The developer of a high assurance system must justify the fitness of the system for its intended use. In "small" scenarios, this problem is well understood: it entails rigorously specifying the properties the system must achieve, designing protocols and correctness proofs, and then implementing and testing the needed code. In scaled-out settings, complex architectures are often unavoidable, and this methodology can no longer be applied.

This talk focuses on how the resulting problems play out in the context of Isis2 (isis2.codeplex.com), a new system I've been building that supports a wide range of high-assurance properties for applications that will be hosted on cloud computing datacenters, and hence might need to run at very ambitious scale.

We'll see that while it is possible to create systems of these kinds, existing formal methods are ill-suited to reasoning about the resulting solutions. This suggests that additional work will be needed to bridge the gap between the mathematical and scientific foundations of high assurance that we teach in academic settings and the real-world instances of such problems that we may face in the near future.

Ken Birman is the N. Rama Rao Professor of Computer Science at Cornell University, where he has headed a research effort in the area of high assurance distributed computing for thirty years. Ken is best known for inventing the virtual synchrony computing model and building the Isis Toolkit, which was ultimately used to build the system that operated the New York Stock Exchange for more than a decade, as well as the systems that continue to operate the French Air Traffic Control system and the US Navy AEGIS today. He also pioneered the use of gossip protocols for system monitoring, management, and control; several of his solutions are used in the platforms that operate today's largest cloud computing infrastructures, notably at Amazon, IBM, and Microsoft. A Fellow of the ACM since 1999, Ken won the 2009 IEEE Kanai Award for his research in distributed systems. He is currently developing online instructional materials for a short course on building high assurance cloud computing systems using his Isis2 platform.

The Future of Networking, and the Past of Protocols

Date and Time
Friday, December 2, 2011 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Jennifer Rexford
Inspired by Barbara Liskov's brilliant Turing Lecture on "The Power of Abstraction", I will discuss the role of abstraction in networking. I will show how searching for the appropriate network control abstractions leads to a new approach to network management called Software-Defined Networking (SDN), which is sometimes mistakenly referred to as "OpenFlow". This talk will not cover any details of SDN or OpenFlow, but will instead try to place these ideas in the context of broader architectural issues.

Scott Shenker spent his academic youth studying theoretical physics but soon gave up chaos theory for computer science. Continuing to display a remarkably short attention span, his research over the years has wandered from computer performance modeling and computer networks to game theory and economics. Unable to hold a steady job, he currently splits his time between the U. C. Berkeley Computer Science Department and the International Computer Science Institute.

Fairness Through Awareness

Date and Time
Wednesday, September 21, 2011 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Sanjeev Arora
"Why was I not shown this advertisement? Why was my loan application denied? Why was I denied admission to this university?"

In this work we initiate the formal study of fairness in classification, where the goal is to prevent discrimination against protected population subgroups in classification systems while simultaneously preserving utility for the party carrying out the classification (e.g., the advertiser, bank, or admissions committee).

We argue that a classification is fair only when individuals who are similar with respect to the classification task at hand are treated similarly, and this in turn requires an understanding of subcultures of the population. Similarity metrics are applied in many contexts, but they are often hidden. Our work explicitly exposes the metric, opening it to public debate.

We then formalize and show how to achieve fairness in classification, given a similarity metric. We also give conditions on the metric under which our "local" notion ensures statistical parity: namely that the demographics of those receiving any given classification are the same as the demographics of the underlying population. In a complementary setting, we propose tools for what can be viewed as "fair affirmative action." Namely, we give methods for guaranteeing statistical parity for a group while treating similar individuals as similarly as possible.
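To make the "local" notion concrete, the core requirement can be written as a Lipschitz condition (a sketch in notation I am assuming, not quoting): a randomized classifier M, mapping each individual to a distribution over outcomes, is fair with respect to a task-specific metric d if

    % Lipschitz fairness condition (sketch; notation assumed).
    \[
      D\bigl(M(x),\, M(y)\bigr) \;\le\; d(x, y)
      \qquad \text{for all individuals } x, y,
    \]
    % where D is a distance between distributions over outcomes.

Statistical parity, by contrast, is a group-level requirement: the outcome distribution for a protected group should be close to that of the population at large.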

Finally, we discuss the relationship of fairness to privacy: to what extent might fairness imply privacy, and is it possible to employ tools developed in the context of differential privacy to obtain fairness?

Joint work with Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel.

Cynthia Dwork '79, Distinguished Scientist at Microsoft Research, is the world's foremost expert on placing privacy-preserving data analysis on a mathematically rigorous foundation. A cornerstone of this work is differential privacy, a strong privacy guarantee permitting highly accurate data analysis. Dr. Dwork has also made seminal contributions in cryptography and distributed computing, and is a recipient of the Edsger W. Dijkstra Prize, recognizing some of her earliest work establishing the pillars on which every fault-tolerant system has been built for decades. She is a member of the US National Academy of Engineering and a Fellow of the American Academy of Arts and Sciences.

Probabilistic Models for Holistic Scene Understanding

Date and Time
Wednesday, October 6, 2010 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
David Blei
Over recent years, computer vision has made great strides towards annotating parts of an image with symbolic labels, such as object categories or segment types. However, we are still far from the ultimate goal of providing a semantic description of an image, such as "a man, walking a dog on a sidewalk, carrying a backpack". In this talk, I will describe our work in this direction, which uses machine learning to construct richly structured, probabilistic models of multiple scene components. We demonstrate the value of such modeling for improvements in basic tasks such as image segmentation and object detection, as well as for making more semantic distinctions regarding shape and activity. The learning of such expressive models poses new challenges, especially when available training data is limited or only weakly labeled. I will describe novel machine learning methods that can train models using distantly or weakly labeled data, thereby making use of much larger amounts of available data.

Daphne Koller received her BSc and MSc degrees from the Hebrew University of Jerusalem, Israel, and her PhD from Stanford University in 1993. After a two-year postdoc at Berkeley, she returned to Stanford, where she is now a Professor in the Computer Science Department. Her main research interest is in developing and using machine learning and probabilistic methods to model and analyze complex domains. Her current research projects include models in computational biology and in extracting semantic meaning from sensor data of the physical world. Daphne Koller is the author of over 150 refereed publications, which have appeared in venues spanning Science, Nature Genetics, Games and Economic Behavior, and a variety of conferences and journals in AI and Computer Science. She has received 9 best paper or best student paper awards, in conferences whose areas span computer vision (ECCV), artificial intelligence (IJCAI), natural language (EMNLP), machine learning (NIPS and UAI), and computational biology (ISMB). She has given keynote talks at over 10 different major conferences, also spanning a variety of areas. She was the program co-chair of the NIPS 2007 and UAI 2001 conferences, and has served on numerous program committees and as associate editor of the Journal of Artificial Intelligence Research and of the Machine Learning Journal. She was awarded the Arthur Samuel Thesis Award in 1994, the Sloan Foundation Faculty Fellowship in 1996, the ONR Young Investigator Award in 1998, the Presidential Early Career Award for Scientists and Engineers (PECASE) in 1999, the IJCAI Computers and Thought Award in 2001, the Cox Medal for excellence in fostering undergraduate research at Stanford in 2003, the MacArthur Foundation Fellowship in 2004, the ACM/Infosys award in 2008, and the Rajeev Motwani Endowed Chair in 2010.

Amdahl's Law in the Multicore Era

Date and Time
Tuesday, November 30, 2010 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Margaret Martonosi
Over the last several decades, computer architects have been phenomenally successful at turning the transistor bounty provided by Moore's Law into chips with ever-increasing single-threaded performance. During many of those successful years, however, researchers paid scant attention to multiprocessor work. Now, as vendors turn to multicore chips, researchers are reacting with more papers on multi-threaded systems. While this is good, we are concerned that further work on single-thread performance will be squashed.

To help understand future high-level trade-offs, we develop a corollary to Amdahl's Law for multicore chips [Hill & Marty, IEEE Computer 2008]. It models fixed chip resources for alternative designs that use symmetric cores, asymmetric cores, or dynamic techniques that allow cores to work together on sequential execution. Our results encourage multicore designers to view the performance of the entire chip rather than focus on core efficiencies. Moreover, we observe that obtaining optimal multicore performance requires further research both in extracting more parallelism and in making sequential cores faster.
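The model's three speedup formulas are compact enough to sketch in a few lines of Python (my transcription of the commonly cited form of the model; perf(r) = sqrt(r) is the paper's illustrative assumption, and the parameter values below are made up):

    from math import sqrt

    # Sketch of the Hill & Marty corollary to Amdahl's Law (IEEE Computer 2008).
    # n: chip budget in base-core equivalents (BCEs); r: BCEs spent on one big
    # core; f: parallelizable fraction of the work. perf(r) = sqrt(r) is the
    # paper's illustrative assumption for how single-core performance scales.

    def perf(r):
        return sqrt(r)

    def symmetric(f, n, r):
        # n/r identical cores, each built from r BCEs.
        return 1.0 / ((1.0 - f) / perf(r) + f * r / (perf(r) * n))

    def asymmetric(f, n, r):
        # One r-BCE core plus (n - r) single-BCE cores.
        return 1.0 / ((1.0 - f) / perf(r) + f / (perf(r) + n - r))

    def dynamic(f, n, r):
        # Sequential phases run at perf(r); parallel phases harness all n BCEs.
        return 1.0 / ((1.0 - f) / perf(r) + f / n)

    # Example: even at f = 0.975, asymmetric and dynamic designs beat the best
    # symmetric design -- the talk's "view the whole chip" message in miniature.
    for design in (symmetric, asymmetric, dynamic):
        print(design.__name__, round(design(0.975, 256, 64), 1))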

This talk is based on an HPCA 2008 keynote address.

Mark D. Hill (http://www.cs.wisc.edu/~markhill) is a professor in both the Computer Sciences Department and the Electrical and Computer Engineering Department at the University of Wisconsin-Madison, where he also co-leads the Wisconsin Multifacet project with David Wood. He earned a Ph.D. from the University of California, Berkeley. He is an ACM Fellow and a Fellow of the IEEE. His past work ranges from refining multiprocessor memory consistency models to developing the 3C model of cache behavior (compulsory, capacity, and conflict misses).

The Power of Abstraction

Date and Time
Wednesday, March 10, 2010 - 4:30pm to 5:30pm
Location
Friend Center Convocation Room
Type
Distinguished Colloquium Series Speaker
Speaker
Barbara Liskov, from MIT
Host
Michael Freedman
Abstraction is at the center of much work in Computer Science. It encompasses finding the right interface for a system as well as finding an effective design for a system implementation. Furthermore, abstraction is the basis for program construction, allowing programs to be built in a modular fashion. This talk will discuss how the abstraction mechanisms we use today came to be, how they are supported in programming languages, and some possible areas for future research.
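As a small illustration of the kind of abstraction mechanism the talk is about (my example in Python, not drawn from the talk itself), consider a data abstraction in the spirit of CLU's clusters: clients program against an interface while the representation stays hidden and replaceable.

    # A tiny data-abstraction example (mine, not from the talk) in the spirit
    # of CLU: clients see only insert/member; the representation is hidden.

    class IntSet:
        """Abstract set of integers; the list is an internal detail."""
        def __init__(self):
            self._elems = []            # representation, invisible to clients

        def insert(self, x):
            if x not in self._elems:
                self._elems.append(x)

        def member(self, x):
            return x in self._elems

    s = IntSet()
    s.insert(3)
    assert s.member(3) and not s.member(4)
    # The representation can later change (sorted array, hash table, ...)
    # without touching client code -- the essence of data abstraction.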

Barbara Liskov is an Institute Professor at MIT and head of the Programming Methodology Group. Liskov's research interests lie in programming methodology, programming languages and systems, and distributed computing. Major projects include: the design and implementation of CLU, the first language to support data abstraction; the design and implementation of Argus, the first high-level language to support implementation of distributed programs; and the Thor object-oriented database system, which provides transactional access to persistent, highly-available objects in wide-scale distributed environments. Her current research interests include Byzantine-fault-tolerant storage systems, peer-to-peer computing, and support for automatic deployment of software upgrades in large-scale distributed systems.

Liskov is a member of the National Academy of Engineering and a fellow of the American Academy of Arts and Sciences and the Association for Computing Machinery. She received the ACM Turing Award in 2009, the ACM SIGPLAN Programming Language Achievement Award in 2008, the IEEE John von Neumann Medal in 2004, a lifetime achievement award from the Society of Women Engineers in 1996, and in 2003 was named one of the 50 most important women in science by Discover Magazine.

Behind the Scenes at Pixar

Date and Time
Wednesday, March 3, 2010 - 4:30pm to 5:30pm
Location
Computer Science Large Auditorium (Room 104)
Type
Distinguished Colloquium Series Speaker
Speaker
Rob Cook, from Pixar
Host
Adam Finkelstein
This talk takes you behind the scenes at Pixar Animation Studios for a look at how its 3D computer graphics films are made. The process starts with the development of the story and continues with modeling the geometry, animating the characters, simulating things like water and cloth and hair, defining the look of the surfaces, putting lights in the scene, and rendering the images. Making a computer animated film requires a close collaboration between artists and technical experts in many areas of expertise and is a great example of the value of bringing different disciplines together.

Rob was the co-architect and primary author of RenderMan, software that creates photo-realistic computer images. Every film nominated for a Visual Effects Oscar in the last 15 years has used RenderMan. In 2001, along with two colleagues, Rob received an Oscar for his contributions, the first ever given for software. He has also received the Steven A. Coons Award for lifetime achievement in computer graphics and was inducted into the National Academy of Engineering. Rob has a Bachelor's degree in Physics from Duke University and a Master's in Computer Graphics from Cornell University. At Cornell, he worked on simulating realistic surfaces, taking computer-generated images beyond the distinctive plastic look they had at the time. In 1981, he joined Lucasfilm/Pixar, where he developed the first programmable shader, which is now an essential part of GPUs and game engines. He was the first to use Monte Carlo techniques in computer graphics, which was essential for the simulation of complex, realistic lights and camera effects. The latter proved particularly important in the special effects industry, because it allowed computer-generated imagery to match the motion blur and depth of field of the live-action footage with which it was combined. He is currently the Vice President of Advanced Technology at Pixar.

RAMCloud: Scalable High-Performance Storage Entirely in DRAM

Date and Time
Wednesday, February 10, 2010 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Brian Kernighan
Disk-oriented approaches to online storage are becoming increasingly problematic: they do not scale gracefully to meet the needs of new large-scale Web applications, and improvements in disk capacity have outstripped improvements in access speed. In this talk I will describe a new approach to datacenter storage called RAMCloud, where information is kept entirely in DRAM and large-scale systems are created by aggregating the main memories of thousands of commodity servers. A RAMCloud can provide durable and available storage with 100-1000x the throughput of disk-based systems and 100-1000x lower access latency. By combining low latency and large scale, RAMClouds will enable a new class of applications that manipulate large datasets more intensively than has ever been possible before.
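A minimal sketch of the aggregation idea (my illustration only; RAMCloud's real API, data model, and durability machinery are not shown): hash each key to one of many servers and keep every value in that server's memory.

    import hashlib

    # Minimal sketch (illustrative, not RAMCloud's API): pool the DRAM of
    # many servers into one key-value store by hashing keys to servers.

    class MiniRamCloud:
        def __init__(self, num_servers):
            # Each dict stands in for one commodity server's main memory.
            self.servers = [dict() for _ in range(num_servers)]

        def _locate(self, key):
            h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
            return self.servers[h % len(self.servers)]

        def write(self, key, value):
            self._locate(key)[key] = value

        def read(self, key):
            return self._locate(key)[key]

    store = MiniRamCloud(1000)     # a thousand servers' memories, one namespace
    store.write("user:42", "alice")
    assert store.read("user:42") == "alice"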

Fleet, Infinity & Marina

Date and Time
Tuesday, September 22, 2009 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Speaker
Ivan Sutherland, from Portland State University
Host
Douglas Clark
This talk describes a radically different architecture for computing called Fleet. Fleet accepts the limitations to computing imposed by physics: moving data costs more energy, more delay, and more chip area than the arithmetic and logical operations ordinarily called "computing." Fleet puts the programmer firmly in charge of the most costly resource: communication. Fleet treats arithmetic and logical operations as side effects of where the programmer sends data.
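A toy illustration of this move-based style (mine, not the actual Fleet instruction set): the "program" below is nothing but a list of moves, and addition happens as a side effect of data arriving at an adder's second input port.

    # Toy sketch of computation as a side effect of data movement
    # (illustrative only; this is not the Fleet architecture itself).

    class Adder:
        def __init__(self):
            self.a = None
        def receive(self, port, value, fabric):
            if port == "a":
                self.a = value                       # first operand parks here
            elif port == "b":
                fabric.send("out", self.a + value)   # arrival triggers the add

    class Fabric:
        def __init__(self):
            self.adder, self.out = Adder(), []
        def send(self, dest, value):
            if dest == "out":
                self.out.append(value)
            else:
                self.adder.receive(dest, value, self)

    # The program is pure data movement: send 2 to port "a", then 3 to port "b".
    f = Fabric()
    for dest, value in [("a", 2), ("b", 3)]:
        f.send(dest, value)
    assert f.out == [5]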

Fleet achieves high performance through fine-grain concurrency. Everything Fleet does is concurrent at the lowest level; programmers who want sequential behavior must program it explicitly. Fleet presents a stark contrast to today's multi-core machines, in which programmers seek concurrency in an inherently sequential environment.

The Fleet architecture uses a uniform switch fabric to simplify chip design. A few thousand identical copies of a configurable interface will connect a thousand or so repetitions of basic arithmetic, logical, input-output, and storage units to the switch fabric. The uniform switch fabric and the identical configurable interfaces will simplify many of the hard parts of designing the computing elements themselves.

Both software and FPGA simulators of a Fleet system are available at UC Berkeley. Berkeley students have written a variety of Fleet programs; their work helped to define what the configurable interface between computing and communication must do. A simple compiler configures both source and destination to provide flow-controlled communication. We expect work on a higher-level language for Fleet to appear soon as a Berkeley PhD dissertation.

Last year we built a 90-nanometer TSMC test chip, called Infinity, at Sun Microsystems. Infinity demonstrated the switch fabric running at about 4 GHz. We now have a new test chip, called Marina, also in 90-nanometer TSMC and also sponsored by Sun. Marina shows correct operation of the configurable switch fabric interface. Together, Infinity and Marina give us confidence to build a complete Fleet. We seek participation from sponsors, computer scientists, and hardware designers.

Ivan Sutherland is a Visiting Scientist at Portland State University, where he and Marly Roncken have recently established the "Asynchronous Research Center" (ARC). The ARC occupies both physical and intellectual space halfway between the Computer Science (CS) and Electrical and Computer Engineering (ECE) departments at the university. The ARC seeks to free designers from the tyranny of the clock by developing better tools and teaching methods for the design of self-timed systems. Prior to moving to Portland, Ivan spent 25 years as a Fellow at Sun Microsystems. A 1959 graduate of Carnegie Tech, Ivan got his PhD at MIT in 1963 and has taught at Harvard, the University of Utah, and Caltech. Ivan is a member of the National Academy of Engineering and the National Academy of Sciences.
