Colloquium

Disappearing Security

Date and Time
Monday, September 23, 2002 - 12:30pm to 2:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Dirk Balfanz, from Xerox PARC
Host
Edward Felten
The Ubiquitous Computing community has been touting the "disappearing computer" for some time now. What they mean is not that there should be no more computers. Rather, they believe that small, unobtrusive computers will be so ubiquitous (think smart wallpaper) that we won't even notice them.

In my talk, I will argue that security mechanisms need to "disappear" in the same sense, for two reasons. First, to secure the "disappearing computer" we need to come up with novel, unobtrusive security mechanisms. Second, "disappearing" security can actually make things more secure, because it gives users less opportunity to get things wrong. I will give a few examples of recent projects at PARC that have been following this notion of "disappearing security".

Towards More Error-Tolerant Internet Protocols

Date and Time
Wednesday, April 24, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
David Wetherall, from University of Washington
Host
Larry Peterson
The Internet protocols were designed from the start to tolerate failures, and they have proved exceedingly resilient to fiber cuts, earthquakes, router meltdowns, and so forth. Yet not all faults are alike. Software failures, whether due to implementation bugs, incorrect operation or deliberate attack, have occasionally wreaked havoc in which the party at fault damages not only themselves, but also potentially large regions of the Internet. We believe that software failures need to be addressed at the protocol design stage, and that only by doing so can we build a network on which we can depend. Understanding how to do this is a work in progress. In this talk, we will autopsy several protocols that proved surprisingly vulnerable to software failures, and describe improved designs that are less fragile. To work towards more robust protocols, we then abstract from these and other examples and speculate on design techniques that can be used to harden protocols.

Cryptography and Security, Theory and Practice

Date and Time
Tuesday, April 23, 2002 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Benny Pinkas, from STAR Lab, InterTrust Technologies
Host
Robert Tarjan
Research in cryptography and security has an incredible potential for fruitful interaction between theory and practice. The case of public-key cryptography may be considered an example of a successful transition from theory to practice. Elsewhere, there has been less interaction between the fields. For instance, cryptographers have devised procedures for performing many seemingly impossible tasks, using zero-knowledge proofs and protocols for secure function evaluation; however, none of these procedures is in everyday use.

This talk describes a body of research that bridges this gap, presenting two results in detail. The first is a protocol involving two parties, each one holding a private database. The aim of the protocol is to compute the well-known ID3 data-mining algorithm on the union of the databases, correctly computing the result without revealing any other information about the two databases. The result is unique in showing that an existing complex algorithm can be implemented in the style of theorists' "secure function evaluation". The protocol has sub-linear overhead, and can be applied to databases containing millions of transactions.
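To make concrete what is being computed securely, here is a minimal sketch of the ID3 splitting criterion itself (the plain, non-private algorithm, not the two-party protocol; the toy dataset and attribute names are illustrative):

```python
# Minimal sketch of ID3's information-gain criterion. The dataset,
# attribute names, and "label" key are illustrative assumptions.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, label="label"):
    """Entropy reduction from splitting `rows` on attribute `attr`."""
    gain = entropy([r[label] for r in rows])
    for value in {r[attr] for r in rows}:
        subset = [r[label] for r in rows if r[attr] == value]
        gain -= len(subset) / len(rows) * entropy(subset)
    return gain

rows = [
    {"outlook": "sunny", "windy": False, "label": "no"},
    {"outlook": "sunny", "windy": True,  "label": "no"},
    {"outlook": "rain",  "windy": False, "label": "yes"},
    {"outlook": "rain",  "windy": True,  "label": "yes"},
]
# ID3 greedily splits on the attribute with the highest gain;
# here "outlook" predicts the label perfectly, so it wins.
best = max(["outlook", "windy"], key=lambda a: information_gain(rows, a))
```

In the secure protocol, each such entropy comparison must be evaluated jointly over the union of the two private databases without either party revealing its rows.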

The second result is a Secure Human Input Protocol (SHIP), which enables human users to encrypt messages (e.g. passwords) in a way that defeats attacks by automated eavesdropping adversaries, and requires no computational aids for the users. The protocol makes novel use of several techniques that attempt to distinguish between a human user and a computer program. We give precise reductions from the security of our protocol to that of the distinguishing techniques that we use.

Parallelizing Programs using Approximate Code

Date and Time
Wednesday, April 17, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Craig Zilles, from University of Wisconsin
Host
David August
Chip multi-processors (CMP), where a processor core is replicated multiple times on a single chip, are widely recognized as a means to exploit the exponentially growing transistor budgets predicted by Moore's Law, while respecting design complexity and physical constraints. While some workloads (e.g., server, scientific) already exhibit sufficient thread-level parallelism, most software does not. For these other programs, the additional complexity of manual parallelization is unattractive, and traditional automatic parallelization techniques have yet to be widely adopted. In this talk, I will present two hardware/software techniques to exploit thread-parallel resources to improve sequential program performance. The first technique uses "helper threads" to avoid stalls (due to cache misses and branch mispredictions) that would be incurred by a single-threaded execution of the program. The second technique uses a "master processor" to compute future checkpoints of program state to achieve a (speculative) parallel execution of the program. Both techniques exploit a common theme: the use of approximate versions of code to generate accurate value predictions.

Estimation Problems in Machine Learning

Date and Time
Wednesday, April 10, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Peter Bartlett, from BIOwulf Technologies and Australian National University
Host
Bernard Chazelle
Statistical estimation problems arise naturally in many areas of machine intelligence. In this talk, we consider problems of this kind from two areas of machine learning: supervised learning, where the goal is to learn to make decisions in an i.i.d. setting, and reinforcement learning, where the goal is to learn to make a sequence of related decisions. In prediction problems, such as pattern classification and regression, estimates of prediction error are important for the analysis and design of learning algorithms. We first review classical error estimates, and then describe more recent `large margin' estimates, which give a better explanation of the success of some of the most popular pattern classification techniques. All of these estimates measure the complexity of a function class without exploiting any information about the process that generated the data. We describe recent work on data-dependent error estimates, which can be much more accurate because they use the training data to capture important properties of this process. In reinforcement learning problems, an agent chooses actions to take in some environment, aiming to maximize a reward function. Many control, scheduling, optimization and game-playing tasks can be formulated in this way. Policy gradient methods consider agents that are restricted to some set of policies, and aim to move through policy space so as to improve the performance of the agent. The central problem for such methods is that of estimating performance gradients. We present algorithms for this problem, and show how their performance depends on properties of the controlled system, such as mixing times.
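The gradient-estimation problem at the heart of policy gradient methods can be sketched with a score-function (REINFORCE-style) estimator. The two-armed bandit, softmax policy, and reward values below are illustrative assumptions, not the algorithms from the talk:

```python
# Score-function estimate of the performance gradient for a softmax
# policy on a two-armed bandit. All concrete values are illustrative.
import math
import random

def softmax(prefs):
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def grad_estimate(prefs, rewards, n_samples=20000, seed=0):
    """Monte Carlo estimate of d E[reward] / d prefs."""
    rng = random.Random(seed)
    probs = softmax(prefs)
    grad = [0.0] * len(prefs)
    for _ in range(n_samples):
        a = rng.choices(range(len(prefs)), weights=probs)[0]
        r = rewards[a]
        # score function: d log pi(a) / d pref_i = 1[i == a] - pi(i)
        for i in range(len(prefs)):
            grad[i] += r * ((1.0 if i == a else 0.0) - probs[i]) / n_samples
    return grad

# Arm 0 pays 1, arm 1 pays 0: the estimated gradient pushes
# probability toward arm 0.
g = grad_estimate([0.0, 0.0], rewards=[1.0, 0.0])
```

The variance of estimators like this one is what ties their performance to properties of the controlled system: in the sequential setting, samples are correlated trajectories, and slow mixing inflates the number of samples needed for a usable estimate.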

Learning Theory and Problems of Statistics

Date and Time
Wednesday, April 3, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Vladimir Vapnik, from NEC Research Institute
Host
Bernard Chazelle
In this talk I will discuss the main principles of inference from data developed in statistical learning theory, which considers, in a non-parametric framework, the problem of inference from a sample of a given (fixed) size.

Along with a survey of new ideas of statistical learning theory and their comparison to classical statistical approaches, I will discuss the problem of constructing learning machines that implement these new ideas.
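For orientation, one canonical result of this theory is the VC generalization bound, which controls the true risk in terms of the empirical risk on a fixed sample of size $n$ (stated here in a standard textbook form; the talk's exact formulation may differ):

```latex
% With probability at least 1 - \eta over a sample of size n, uniformly
% over a function class of VC dimension h:
R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha)
  + \sqrt{\frac{h\left(\ln\frac{2n}{h} + 1\right) - \ln\frac{\eta}{4}}{n}}
```

The bound is distribution-free and makes no parametric assumption about the data source, which is the sense in which the framework is non-parametric.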

Automatic Tools for Building Secure Systems

Date and Time
Wednesday, March 27, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Dawn Song, from UC Berkeley
Host
David Walker
Building a secure system is a complex and error-prone process. System designers and developers face many challenges: What does it mean for a system to be secure? How do I know whether my system is secure? Will the security of my system break if I add a new component? SSL/TLS and other standard protocols do not work in my scenario. How can I find ways to achieve security in my system efficiently? How can I make sure the implementation adheres to the design? Finding answers to these questions is essential to building secure, efficient systems; however, the complexity, subtlety, and interactions among different components in a large system put the problem beyond the reach of even experienced security experts, not to mention average programmers who lack security expertise. As a result, the current design and implementation process for secure systems is slow, expensive, and often results in a vulnerable system. In my thesis, I propose a new automatic approach for building security protocols. In particular, I designed and built a suite of automatic tools, Athena, containing three components: 1) APV: an Automatic Protocol Analyzer; 2) APG: an Automatic Protocol Generator; 3) ACG: an Automatic Code Generator. In this talk, I'll discuss how this toolkit enables a new automatic approach for building security protocols that is more efficient and economical than the current approach, and offers stronger security guarantees.

Event (no name)

Date and Time
Wednesday, March 20, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
TBD
Host
Andrew Appel

A Signal-Processing Framework for Forward and Inverse Rendering

Date and Time
Wednesday, March 13, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Ravi Ramamoorthi, from Stanford University
Host
Thomas Funkhouser
Understanding the nature of reflection and illumination is important in many areas of computer graphics and computer vision. In this talk, I describe a new way of looking at reflection on a curved surface, as a special type of convolution of the incident illumination and the reflective properties of the surface (technically, the bi-directional reflectance distribution function or BRDF). We formalize these notions by deriving a convolution theorem in terms of the spherical harmonic coefficients of the lighting and BRDF. This allows us to introduce a signal-processing framework for reflection, wherein the incident lighting is the signal, the BRDF is the filter, and the reflected light is obtained by filtering the input illumination (signal) using the frequency response of the BRDF filter. I will demonstrate applications in two areas. First, we show how our framework can be used for computing and displaying synthetic images in real-time with natural illumination and physically-based BRDFs. We will call this the "forward rendering" or the convolution problem. Next, we extend and apply our framework to estimating realistic lighting and reflective properties from photographs, and show how this approach can be used to synthesize very realistic images under novel lighting and viewing conditions. We will call this the "inverse rendering" or the deconvolution problem. In my talk, I will first describe the theoretical framework, and then discuss the above two applications.
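The convolution theorem takes a particularly simple form in spherical harmonics. In a simplified setting (a radially symmetric BRDF acting as the filter), the reflected light coefficients are, up to normalization, products of lighting and BRDF coefficients; the notation below is a standard statement of this relationship and may differ in detail from the talk's:

```latex
% Lighting coefficients L_{lm}, BRDF filter coefficients \hat\rho_l,
% reflected light coefficients B_{lm}:
B_{lm} \;=\; \Lambda_l\,\hat{\rho}_l\,L_{lm},
\qquad \Lambda_l = \sqrt{\frac{4\pi}{2l+1}}
```

Forward rendering then amounts to multiplying coefficients, while inverse rendering divides them out, which becomes ill-conditioned wherever $\hat{\rho}_l$ is small: the deconvolution problem the abstract refers to.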

Programming for Pervasive Computing Environments

Date and Time
Tuesday, March 12, 2002 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Robert Grimm, from University of Washington
Host
Chong Wang
Pervasive computing provides an attractive vision for the future of computing. Computing power will be available everywhere. Mobile and stationary devices will dynamically connect and coordinate to seamlessly help people accomplish their tasks. However, for this vision to become a reality, programmers must build applications that constantly adapt to an environment in which people, devices, and services keep coming and going. In my talk, I will explore how to make the programmers' task feasible. I will discuss how to structure the necessary systems support and how programming for highly dynamic computing environments impacts application development. I will introduce one.world, a system that I have built, and reflect on my own and others' experiences with it, which demonstrate the practicality of building and deploying pervasive applications.