Colloquium

Matching algorithms and hardware: How a brain 'computes' so well

Date and Time
Wednesday, December 4, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
John Hopfield, from Princeton
Host
Sanjeev Arora
When the available 'device physics' or 'device biophysics' can be used directly in an algorithm, a small quantity of computing hardware can be astonishingly effective. Evolution makes such solutions available for neurobiological computations. I will describe 'biological' models of such effective systems from an engineering perspective, to give insight into how a brain may do some of its computations.

New Tools in Cryptography

Date and Time
Monday, November 25, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Dan Boneh, from Stanford
Host
Sanjeev Arora
Over the past three years we have seen a number of exciting new cryptographic constructions based on certain bilinear maps. In this talk we will survey some of these new constructions and their applications. For example, bilinear maps give rise to a new digital signature scheme with remarkable properties: it enables the aggregation of many signatures, which can be used to reduce communication in secure routing protocols and to shrink certificate chains. Bilinear maps have also been used to construct the first practical public-key encryption scheme in which public keys can be arbitrary strings. We will also describe several open problems in this area. The talk will be self-contained.
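
To illustrate why bilinearity enables aggregation, here is a standard BLS-style sketch (not necessarily the exact scheme presented in the talk): suppose $e : G \times G \to G_T$ is bilinear, $g$ generates $G$, and $H$ hashes messages into $G$.

```latex
% BLS-style aggregate signatures from a bilinear map (illustrative sketch)
\begin{align*}
  \text{keys:}      &\quad x_i \in \mathbb{Z}_q \ \text{(secret)}, \qquad v_i = g^{x_i} \ \text{(public)}\\
  \text{sign:}      &\quad \sigma_i = H(m_i)^{x_i}\\
  \text{aggregate:} &\quad \sigma = \textstyle\prod_i \sigma_i\\
  \text{verify:}    &\quad e(\sigma, g) \stackrel{?}{=} \textstyle\prod_i e\bigl(H(m_i), v_i\bigr)
\end{align*}
```

Correctness follows from bilinearity: $e(\prod_i H(m_i)^{x_i}, g) = \prod_i e(H(m_i), g)^{x_i} = \prod_i e(H(m_i), v_i)$, so many signatures shrink to a single group element.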

Hourglass: An Architecture for Pervasive Applications

Date and Time
Thursday, November 14, 2002 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Margo Seltzer, from Harvard
Host
Larry Peterson
Mainframes to mini-computers to workstations to PCs to PDAs. What comes next in the miniaturization of computing? The answer lies in networks of tiny computational devices, potentially equipped with sensors, embedded in the world around us. This area is rich in problems, from the theoretical to the applied. In this talk, I will present our vision of pervasive applications: what they'll look like and what infrastructure we must provide to enable this next generation in computing.

Proof Tools and Correct Program Development

Date and Time
Monday, November 4, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Aaron Stump, from Washington University in St. Louis
Host
Andrew Appel
A new approach to developing correct programs is proposed. The goal of the approach is to allow developers to translate some of their intuitive ideas about why their code is correct into something machine-checkable, such as a proof fragment or partial proof. These partial proofs are to be integrated with the program text and checked at compile time.

As a modest step towards this goal, extended assertions are proposed. Extended assertions allow developers to say "If these assertions hold at these program points, then this other assertion holds at this program point." A simple implementation is demonstrated for a minimal imperative programming language without recursion or loops. Programs in this language, together with their extended assertions, are compiled into logical formulas, which can be checked automatically by the CVC (Cooperating Validity Checker) system. An overview of CVC follows. The talk concludes with a look at Rogue, an experimental programming language for manipulating expressions. The compiler from the recursion-free language to CVC's logic is written in Rogue, and Rogue is being considered for writing parts of CVC itself.
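
The abstract does not spell out the compilation scheme, but the core idea (forward substitution through straight-line assignments, then a validity check on the resulting implication) can be sketched in a few lines. This is a toy illustration with invented names; exhaustive testing over a small integer domain stands in for a real validity checker like CVC:

```python
import itertools
import re

def substitute(expr, env):
    """Replace each variable in expr with its current symbolic value."""
    return re.sub(r"\b[A-Za-z_]\w*\b",
                  lambda m: "(" + env.get(m.group(0), m.group(0)) + ")",
                  expr)

def compile_vc(inputs, assignments, pre, post):
    """Forward symbolic execution of a loop-free program: returns the
    verification condition 'pre implies post' over the initial inputs."""
    env = {v: v for v in inputs}
    for var, rhs in assignments:
        env[var] = substitute(rhs, env)
    return "not (%s) or (%s)" % (pre, substitute(post, env))

def check(vc, inputs, domain=range(-5, 6)):
    """Stand-in for a validity checker such as CVC: brute-force the
    formula over a small integer domain."""
    return all(eval(vc, dict(zip(inputs, vals)))
               for vals in itertools.product(domain, repeat=len(inputs)))

# "If a >= 0 holds at the start, then y >= 2 holds at the end."
prog = [("x", "a + 1"), ("y", "x * 2")]
vc = compile_vc(["a"], prog, "a >= 0", "y >= 2")
```

Here `compile_vc` yields an implication purely over the input `a`, which `check` confirms holds over the test domain; a tool like CVC would decide such formulas symbolically rather than by enumeration.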

Partitioned Applications in Protium

Date and Time
Wednesday, October 16, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Cliff Young, from Bell Labs
Host
Brian Kernighan
_Protium_ uses a partitioned application approach to provide universal data service: making traditional desktop applications available on any Internet-connected device. We partition applications into viewers (which run on the device near the user), services (which are accessed through the network and run in managed environments), and the application-specific protocols that connect them. The goal of using a partitioned architecture is to provide consistent, responsive, remote access on a wide variety of devices, where the devices are reliably networked but may have limited communications or computational abilities. We particularly focus on hiding communication latency in a connected environment.

The network between viewers and services provides a new place to put infrastructure; Protium includes system infrastructure that supports multiple viewers sharing a service, disconnection and reconnection, name lookup, security, session management, and device adaptation.

Recovery Oriented Computing (ROC)

Date and Time
Tuesday, October 8, 2002 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
David Patterson, from UC Berkeley
Host
Robert Tarjan
It is time to broaden our performance-dominated research agenda. A four-order-of-magnitude increase in performance over 20 years means that few outside the CS&E research community believe that speed is the only problem of computer hardware and software. If we don't change our ways, our legacy may be cheap, fast, and flaky.

Recovery Oriented Computing (ROC) takes the perspective that hardware faults, software bugs, and operator errors are facts to be coped with, not problems to be solved. By concentrating on Mean Time to Repair rather than Mean Time to Failure, ROC reduces recovery time and thus offers higher availability. Since a large portion of system administration is dealing with failures, ROC may also reduce total cost of ownership. ROC principles include design for fast recovery, extensive error detection and diagnosis, systematic error insertion to test emergency systems, and recovery benchmarks to measure progress.
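
The MTTR argument is just arithmetic on the standard steady-state availability formula; a small illustrative calculation (numbers invented for the example):

```python
def availability(mttf, mttr):
    """Steady-state availability: fraction of time the system is up.
    mttf = mean time to failure, mttr = mean time to repair (same units)."""
    return mttf / (mttf + mttr)

def downtime_hours_per_year(mttf, mttr):
    """Expected unavailable hours out of 8760 hours in a year."""
    return 8760 * (1 - availability(mttf, mttr))

# Holding MTTF fixed, a 10x cut in repair time gives roughly a
# 10x cut in downtime, which is why ROC targets MTTR.
slow = downtime_hours_per_year(1000, 10)  # roughly 87 hours/year down
fast = downtime_hours_per_year(1000, 1)   # roughly 8.8 hours/year down
```

The same improvement is far harder to obtain by stretching MTTF, since failures from operator error and software bugs are, as the talk argues, facts to be coped with.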

If we embrace availability and maintainability, systems of the future may compete on recovery performance rather than just processor performance, and on total cost of ownership rather than just system price. Such a change may restore our pride in the systems we craft.

Disappearing Security

Date and Time
Monday, September 23, 2002 - 12:30pm to 2:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Dirk Balfanz, from Xerox PARC
Host
Edward Felten
The Ubiquitous Computing community has been touting the "disappearing computer" for some time now. What they mean is not that there should be no more computers. Rather, they believe that small, unobtrusive computers will be so ubiquitous (think smart wallpaper) that we won't even notice them.

In my talk, I will argue that security mechanisms need to "disappear" in the same sense, for two reasons: First, to secure the "disappearing computer" we need to come up with novel, unobtrusive security mechanisms. Second, "disappearing" security can actually make things more secure, because it gives users less opportunity to get things wrong. I will give a few examples of recent projects at PARC that have been following this notion of "disappearing security".

Towards More Error-Tolerant Internet Protocols

Date and Time
Wednesday, April 24, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
David Wetherall, from University of Washington
Host
Larry Peterson
The Internet protocols were designed from the start to tolerate failures, and they have proved exceedingly resilient to fiber cuts, earthquakes, router meltdowns, and so forth. Yet not all faults are alike. Software failures, whether due to implementation bugs, incorrect operation or deliberate attack, have occasionally wreaked havoc in which the party at fault damages not only themselves, but also potentially large regions of the Internet. We believe that software failures need to be addressed at the protocol design stage, and that only by doing so can we build a network on which we can depend. Understanding how to do this is a work in progress. In this talk, we will autopsy several protocols that proved surprisingly vulnerable to software failures, and describe improved designs that are less fragile. To work towards more robust protocols, we then abstract from these and other examples and speculate on design techniques that can be used to harden protocols.

Cryptography and Security, Theory and Practice

Date and Time
Tuesday, April 23, 2002 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Benny Pinkas, from STAR Lab, InterTrust Technologies
Host
Robert Tarjan
Research in cryptography and security has an incredible potential for fruitful interaction between theory and practice. The case of public-key cryptography may be considered an example of a successful transition from theory to practice. Elsewhere, there has been less interaction between the fields. For instance, cryptographers have devised procedures for performing many seemingly impossible tasks, using zero-knowledge proofs and protocols for secure function evaluation; however, none of these procedures is in everyday use.

This talk describes a body of research that bridges this gap, presenting two results in detail. The first is a protocol involving two parties, each one holding a private database. The aim of the protocol is to compute the well-known ID3 data-mining algorithm on the union of the databases, correctly computing the result without revealing any other information about the two databases. The result is unique in showing that an existing complex algorithm can be implemented in the style of theorists' "secure function evaluation". The protocol has sub-linear overhead, and can be applied to databases containing millions of transactions.

The second result is a Secure Human Input Protocol (SHIP), which enables human users to encrypt messages (e.g. passwords) in a way that defeats attacks by automated eavesdropping adversaries, and requires no computational aids for the users. The protocol makes novel use of several techniques that attempt to distinguish between a human user and a computer program. We give precise reductions from the security of our protocol to that of the distinguishing techniques that we use.

Parallelizing Programs using Approximate Code

Date and Time
Wednesday, April 17, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Craig Zilles, from University of Wisconsin
Host
David August
Chip multi-processors (CMP), where a processor core is replicated multiple times on a single chip, are widely recognized as a means to exploit the exponentially growing transistor budgets predicted by Moore's Law, while respecting design complexity and physical constraints. While some workloads (e.g., server, scientific) already exhibit sufficient thread-level parallelism, most software does not. For these other programs, the additional complexity of manual parallelization is unattractive, and traditional automatic parallelization techniques have yet to be widely adopted. In this talk, I will present two hardware/software techniques to exploit thread-parallel resources to improve sequential program performance. The first technique uses "helper threads" to avoid stalls (due to cache misses and branch mispredictions) that would be incurred by a single-threaded execution of the program. The second technique uses a "master processor" to compute future checkpoints of program state to achieve a (speculative) parallel execution of the program. Both techniques exploit a common theme: the use of approximate versions of code to generate accurate value predictions.
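
The second technique's control flow can be illustrated with a toy, sequential sketch. The function names and computations here are invented for illustration; in the real proposal the speculative work runs concurrently on another core:

```python
def stage1(x):
    """Precise but expensive computation."""
    return sum(i * i for i in range(x)) % 1000

def stage1_approx(x):
    """'Approximate code': a cheap distilled version of stage1 that is
    usually right (it omits the final modulo, so it mispredicts once
    the sum of squares exceeds 1000)."""
    return x * (x - 1) * (2 * x - 1) // 6

def stage2(v):
    """Downstream computation that depends on stage1's result."""
    return 2 * v + 1

def run_speculative(x):
    predicted = stage1_approx(x)     # master's cheap value prediction
    speculative = stage2(predicted)  # would run in parallel with stage1
    actual = stage1(x)               # the precise computation
    if predicted == actual:
        return speculative           # speculation succeeded
    return stage2(actual)            # misprediction: squash and redo
```

Correctness never depends on the approximate version: a misprediction only costs the wasted speculative work, so the approximate code is free to trade accuracy for speed.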