Colloquium

Recovery Oriented Computing (ROC)

Date and Time
Tuesday, October 8, 2002 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
David Patterson, from UC Berkeley
Host
Robert Tarjan
It is time to broaden our performance-dominated research agenda. A four-order-of-magnitude increase in performance over 20 years means that few outside the CS&E research community believe that speed is the only problem of computer hardware and software. If we don't change our ways, our legacy may be cheap, fast, and flaky.

Recovery Oriented Computing (ROC) takes the perspective that hardware faults, software bugs, and operator errors are facts to be coped with, not problems to be solved. By concentrating on Mean Time to Repair rather than Mean Time to Failure, ROC reduces recovery time and thus offers higher availability. Since a large portion of system administration is dealing with failures, ROC may also reduce total cost of ownership. ROC principles include design for fast recovery, extensive error detection and diagnosis, systematic error insertion to test emergency systems, and recovery benchmarks to measure progress.
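
The availability argument behind ROC reduces to one line of arithmetic: steady-state availability is MTTF / (MTTF + MTTR), so cutting repair time pays the same dividend as lengthening time to failure. The short sketch below is an illustration of that formula, not code from the talk; the numbers are invented.

```python
# Steady-state availability = MTTF / (MTTF + MTTR): shrinking repair time
# raises availability without touching the failure rate. Numbers are invented.

def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Fraction of time a repairable system is up."""
    return mttf_hours / (mttf_hours + mttr_hours)

baseline = availability(mttf_hours=1000, mttr_hours=4.0)    # ~0.99602
fast_fix = availability(mttf_hours=1000, mttr_hours=0.25)   # ~0.99975

print(f"4-hour repairs:    {baseline:.5f}")
print(f"15-minute repairs: {fast_fix:.5f}")
```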

If we embrace availability and maintainability, systems of the future may compete on recovery performance rather than just processor performance, and on total cost of ownership rather than just system price. Such a change may restore our pride in the systems we craft.

Disappearing Security

Date and Time
Monday, September 23, 2002 - 12:30pm to 2:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Dirk Balfanz, from Xerox PARC
Host
Edward Felten
The Ubiquitous Computing community has been touting the "disappearing computer" for some time now. What they mean is not that there should be no more computers. Rather, they believe that small, unobtrusive computers will be so ubiquitous (think smart wallpaper) that we won't even notice them.

In my talk, I will argue that security mechanisms need to "disappear" in the same sense, for two reasons. First, to secure the "disappearing computer" we need to come up with novel, unobtrusive security mechanisms. Second, "disappearing" security can actually make things more secure, because it gives users less opportunity to get things wrong. I will give a few examples of recent projects at PARC that have been following this notion of "disappearing security".

Towards More Error-Tolerant Internet Protocols

Date and Time
Wednesday, April 24, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
David Wetherall, from University of Washington
Host
Larry Peterson
The Internet protocols were designed from the start to tolerate failures, and they have proved exceedingly resilient to fiber cuts, earthquakes, router meltdowns, and so forth. Yet not all faults are alike. Software failures, whether due to implementation bugs, incorrect operation, or deliberate attack, have occasionally wreaked havoc in which the party at fault damages not only itself but also potentially large regions of the Internet. We believe that software failures need to be addressed at the protocol design stage, and that only by doing so can we build a network on which we can depend. Understanding how to do this is a work in progress. In this talk, we will autopsy several protocols that proved surprisingly vulnerable to software failures, and describe improved designs that are less fragile. To work towards more robust protocols, we then abstract from these and other examples and speculate on design techniques that can be used to harden protocols.
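
One flavor of such hardening techniques is defensive validation: a receiver checks a peer's messages against independent plausibility rules rather than trusting the remote implementation to be correct. The sketch below is an invented illustration of that style for a BGP-like route update; the constants, bounds, and field names are assumptions, not material from the talk.

```python
# An invented illustration (not from the talk) of defensive protocol design:
# validate a peer's routing update against independent plausibility checks
# instead of assuming the remote implementation is correct. The AS number,
# bounds, and reserved-prefix list here are assumptions for the sketch.

from ipaddress import ip_network

MY_AS = 64496                      # hypothetical local AS number
MAX_PATH_LEN = 64                  # arbitrary sanity bound
RESERVED = [ip_network("0.0.0.0/8"), ip_network("10.0.0.0/8"),
            ip_network("127.0.0.0/8")]

def plausible_route_update(prefix: str, as_path: list) -> bool:
    """Return True only if a correct peer could have sent this update."""
    try:
        net = ip_network(prefix)
    except ValueError:
        return False                       # malformed prefix
    if any(net.subnet_of(r) for r in RESERVED):
        return False                       # announcing reserved address space
    if not as_path or len(as_path) > MAX_PATH_LEN:
        return False                       # empty or implausibly long path
    if MY_AS in as_path:
        return False                       # loop: we already appear on the path
    return True

print(plausible_route_update("203.0.113.0/24", [64500, 64511]))  # True
print(plausible_route_update("10.0.0.0/8", [64500]))             # False
```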

Cryptography and Security, Theory and Practice

Date and Time
Tuesday, April 23, 2002 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Benny Pinkas, from STAR Lab, Intertrust Technologies
Host
Robert Tarjan
Research in cryptography and security has an incredible potential for fruitful interaction between theory and practice. The case of public-key cryptography may be considered an example of a successful transition from theory to practice. Elsewhere, there has been less interaction between the fields. For instance, cryptographers have devised procedures for performing many seemingly impossible tasks, using zero-knowledge proofs and protocols for secure function evaluation; however, none of these procedures is in everyday use.

This talk describes a body of research that bridges this gap, presenting two results in detail. The first is a protocol involving two parties, each one holding a private database. The aim of the protocol is to compute the well-known ID3 data-mining algorithm on the union of the databases, correctly computing the result without revealing any other information about the two databases. The result is unique in showing that an existing complex algorithm can be implemented in the style of theorists' "secure function evaluation". The protocol has sub-linear overhead, and can be applied to databases containing millions of transactions.
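
For context on the computation the two parties carry out jointly, the sketch below shows the core ID3 step, choosing the split attribute with the largest information gain, in the clear on toy records; the contribution described in the talk is performing this step over the union of the two private databases without revealing them. The records and field names are illustrative assumptions.

```python
# The core ID3 step in the clear (toy data, invented field names): pick the
# attribute whose split yields the largest information gain. The protocol in
# the talk computes this jointly over two private databases; this sketch only
# shows the quantity being computed.

from collections import Counter
from math import log2

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def information_gain(records, attribute, label_key="label"):
    """Entropy reduction from splitting the records on one attribute."""
    base = entropy([r[label_key] for r in records])
    remainder = 0.0
    for value in {r[attribute] for r in records}:
        subset = [r[label_key] for r in records if r[attribute] == value]
        remainder += (len(subset) / len(records)) * entropy(subset)
    return base - remainder

records = [
    {"outlook": "sunny", "windy": False, "label": "no"},
    {"outlook": "sunny", "windy": True,  "label": "no"},
    {"outlook": "rain",  "windy": False, "label": "yes"},
    {"outlook": "rain",  "windy": True,  "label": "yes"},
]
best = max(["outlook", "windy"], key=lambda a: information_gain(records, a))
print(best)   # "outlook": it separates the labels perfectly on this toy data
```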

The second result is a Secure Human Input Protocol (SHIP), which enables human users to encrypt messages (e.g. passwords) in a way that defeats attacks by automated eavesdropping adversaries, and requires no computational aids for the users. The protocol makes novel use of several techniques that attempt to distinguish between a human user and a computer program. We give precise reductions from the security of our protocol to that of the distinguishing techniques that we use.

Parallelizing Programs using Approximate Code

Date and Time
Wednesday, April 17, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Craig Zilles, from University of Wisconsin
Host
David August
Chip multi-processors (CMP), where a processor core is replicated multiple times on a single chip, are widely recognized as a means to exploit the exponentially growing transistor budgets predicted by Moore's Law, while respecting design complexity and physical constraints. While some workloads (e.g., server, scientific) already exhibit sufficient thread-level parallelism, most software does not. For these other programs, the additional complexity of manual parallelization is unattractive, and traditional automatic parallelization techniques have yet to be widely adopted. In this talk, I will present two hardware/software techniques to exploit thread-parallel resources to improve sequential program performance. The first technique uses "helper threads" to avoid stalls (due to cache misses and branch mispredictions) that would be incurred by a single-threaded execution of the program. The second technique uses a "master processor" to compute future checkpoints of program state to achieve a (speculative) parallel execution of the program. Both techniques exploit a common theme: the use of approximate versions of code to generate accurate value predictions.
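
As a rough intuition for the first technique, the toy sketch below mimics helper-thread prefetching at the software level: a stripped-down, approximate copy of the loop runs ahead of the main computation and warms a cache so the main thread rarely waits. It is only an analogy under invented latencies, not the microarchitectural mechanism described in the talk.

```python
# A toy, software-level analogy of helper-thread prefetching (not the hardware
# mechanism from the talk): an approximate copy of the loop runs ahead and
# warms a cache. Latencies and the key-prediction rule are invented.

import threading
import time

cache = {}

def slow_lookup(key):
    time.sleep(0.01)                      # stand-in for a long-latency miss
    return key * key

def helper_thread(keys):
    # Approximate code: guess the main loop's future keys (here, trivially,
    # in order) and skip everything else. A wrong guess only wastes work.
    for k in keys:
        if k not in cache:
            cache[k] = slow_lookup(k)

def main_loop(keys):
    total = 0
    for k in keys:
        value = cache.get(k)
        if value is None:                 # helper fell behind: pay the miss
            value = slow_lookup(k)
        total += value
        time.sleep(0.02)                  # stand-in for per-item real work
    return total

keys = list(range(100))
threading.Thread(target=helper_thread, args=(keys,), daemon=True).start()
t0 = time.time()
print(main_loop(keys), f"elapsed {time.time() - t0:.1f}s")  # ~2s vs ~3s alone
```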

Estimation Problems in Machine Learning

Date and Time
Wednesday, April 10, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Peter Bartlett, from BIOwulf Technologies and Australian National University
Host
Bernard Chazelle
Statistical estimation problems arise naturally in many areas of machine intelligence. In this talk, we consider problems of this kind from two areas of machine learning: supervised learning, where the goal is to learn to make decisions in an i.i.d. setting, and reinforcement learning, where the goal is to learn to make a sequence of related decisions.

In prediction problems, such as pattern classification and regression, estimates of prediction error are important for the analysis and design of learning algorithms. We first review classical error estimates, and then describe more recent "large margin" estimates, which give a better explanation of the success of some of the most popular pattern classification techniques. All of these estimates measure the complexity of a function class without exploiting any information about the process that generated the data. We describe recent work on data-dependent error estimates, which can be much more accurate because they use the training data to capture important properties of this process.

In reinforcement learning problems, an agent chooses actions to take in some environment, aiming to maximize a reward function. Many control, scheduling, optimization and game-playing tasks can be formulated in this way. Policy gradient methods consider agents that are restricted to some set of policies, and aim to move through policy space so as to improve the performance of the agent. The central problem for such methods is that of estimating performance gradients. We present algorithms for this problem, and show how their performance depends on properties of the controlled system, such as mixing times.
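
As a concrete instance of the gradient-estimation problem, the sketch below implements a REINFORCE-style estimator for a two-armed bandit with a softmax policy, one of the simplest settings in which the performance gradient must be estimated from sampled rewards. The environment, step size, and sample counts are invented for illustration and are not taken from the talk.

```python
# A minimal sketch (illustrative, not from the talk) of estimating a
# performance gradient from samples: REINFORCE on a two-armed bandit with a
# softmax policy. The reward probabilities and step size are invented.

import math
import random

REWARD_PROB = [0.2, 0.8]          # hypothetical environment: P(reward | arm)

def softmax(theta):
    z = [math.exp(t) for t in theta]
    s = sum(z)
    return [x / s for x in z]

def estimate_gradient(theta, episodes=2000):
    """Monte Carlo estimate of d(expected reward)/d(theta)."""
    grad = [0.0, 0.0]
    probs = softmax(theta)
    for _ in range(episodes):
        arm = random.choices([0, 1], weights=probs)[0]
        reward = 1.0 if random.random() < REWARD_PROB[arm] else 0.0
        # score function: d log pi(arm) / d theta_i = 1[i == arm] - pi(i)
        for i in range(2):
            grad[i] += reward * ((1.0 if i == arm else 0.0) - probs[i])
    return [g / episodes for g in grad]

theta = [0.0, 0.0]
for step in range(200):
    g = estimate_gradient(theta)
    theta = [t + 0.5 * gi for t, gi in zip(theta, g)]   # ascend the estimate
print(softmax(theta))   # the policy should now strongly prefer the better arm
```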
