
Data Retention - A Threat to Online Anonymity? Theory and Empirical Facts

Date and Time
Thursday, February 25, 2010 - 12:30pm to 1:30pm
Location
Sherrerd Hall 306
Type
Talk
The European Union's recent legislation on data retention is highly controversial and perceived as a threat to an array of individual freedoms. Its technical provisions cover all kinds of communications providers, including proxy servers. This justifies a closer look at how data retention affects the ability to deploy mix-based anonymity services and what level of security can still be provided legally. Rainer will talk in particular about the situation in Germany and the experience gained from the popular cascade-based web anonymizer AN.ON, which is operated at his former university, TU Dresden. Citing empirical data collected from AN.ON, new "legal adversary models" are introduced, calibrated, and assessed. While the overall situation for online anonymity is not too bad under current law, the talk closes with an outlook on possible future developments and adequate technological responses.

Rainer is a postdoctoral fellow in the Networking Group of the International Computer Science Institute in Berkeley, supported by the German Academic Exchange Service (DAAD). His research interests include privacy-enhancing technologies, the economics of privacy and information security, and multimedia forensics. He holds an M.A. degree in Communication Science and Economics and a PhD in Computer Science, both from Technische Universitaet Dresden in Germany. Before obtaining his PhD, he worked at the European Central Bank, serving in the directorates for economics and for financial stability and supervision.

Distributed Compact Routing

Date and Time
Thursday, February 4, 2010 - 3:00pm to 4:00pm
Location
Computer Science 302
Type
Talk
Speaker
Host
Alex Fabrikant
Common wisdom is that the way to scale networks to large size is hierarchy: routing is performed on higher level aggregates across domains, separately from routing within domains. We live with the consequences: inflated path lengths, and the use of location-dependent addresses (IP addresses in the Internet) which complicate management, mobility, and multihoming.

I will present Distributed Compact Routing, a protocol which (1) guarantees scalability, in the form of roughly √n state for a network of n nodes; (2) guarantees approximately-shortest paths, in the form of constant stretch; and (3) routes directly on flat, location-independent identifiers. Our work builds on recent theoretical advances in the area of compact routing. Past attempts to translate these centralized algorithms to practical distributed protocols have had limited success, compromising state or stretch guarantees or assuming special topologies. Our results thus represent a significant step forward in routing technology, which we believe has direct applicability to many kinds of networks including peer-to-peer networks, Internet routing, and content-centric networks.

Joint work with Ankit Singla, Kevin Fall, Gianluca Iannaccone, and Sylvia Ratnasamy.
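To make the √n-state idea concrete, here is a minimal, hypothetical sketch (not the authors' protocol) of landmark-based compact routing in the Thorup-Zwick style: each node stores routes to roughly √n randomly chosen landmarks, plus to the nodes that lie closer to it than to their own nearest landmark; a packet goes directly when the destination is in the table, and via the destination's landmark otherwise.

```python
import math
import random
from collections import deque

def bfs_dist(adj, s):
    """Hop distance from s to every node, via BFS."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def build_tables(adj, seed=0):
    """Pick ~sqrt(n) landmarks; each node's table holds all landmarks
    plus the nodes that sit closer to it than to their own landmark."""
    nodes = sorted(adj)
    rng = random.Random(seed)
    landmarks = rng.sample(nodes, max(1, int(math.sqrt(len(nodes)))))
    dist = {u: bfs_dist(adj, u) for u in nodes}
    near = {v: min(landmarks, key=lambda l: dist[v][l]) for v in nodes}
    tables = {u: set(landmarks) |
                 {v for v in nodes if dist[u][v] < dist[v][near[v]]}
              for u in nodes}
    return tables, near

def route(src, dst, tables, near):
    """Send directly if dst is in src's table; otherwise forward
    toward dst's nearest landmark, which keeps a route to dst."""
    if dst in tables[src]:
        return [src, dst]
    return [src, near[dst], dst]
```

Schemes of this family keep per-node state sublinear while bounding stretch by a constant (at most 3 in the full Thorup-Zwick construction); the talk's contribution is achieving comparable guarantees with a distributed protocol rather than the centralized preprocessing assumed in this sketch.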

Managing Sensor Network Resource Usage and Monitoring Active Volcanoes

Date and Time
Tuesday, December 15, 2009 - 12:30pm to 1:30pm
Location
Computer Science 402
Type
Talk
Speaker
Geoffrey Challen, from Harvard University
Host
Jennifer Rexford
Sensor networks composed of large numbers of self-organizing embedded devices are an increasingly valuable tool for understanding our world. Deployed networks allow scientists to observe phenomena at a scale and resolution that challenge existing instrumentation. Some call this new instrument the macroscope. My project uses sensor networks to monitor active volcanoes. Due to the high data rates and stringent fidelity requirements of this application, providing output suitable for scientific analysis requires carefully directing the limited resources available at each node. In this talk I will present Lance, a general approach to bandwidth and energy management targeting reliable data collection for sensor networks. By combining an application-level determination of value with a system-level estimation of cost, Lance maximizes the value of the data returned to the application by optimally allocating bandwidth and energy devoted to signal collection. Lance's design decouples data collection policy from mechanism, allowing its optimization metrics to be customized to suit a variety of application goals. I will motivate and describe the Lance architecture, present results from the lab and the field, and discuss continuing efforts in this area, including single-node and network-wide architectures for distributed energy management.
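As a rough illustration of the value/cost idea (not Lance's actual policy engine; the field names and the greedy rule are invented for this sketch), a scheduler of this kind can be approximated by a knapsack-style heuristic that fetches data units in decreasing value-per-cost order until a bandwidth budget is exhausted.

```python
def schedule_downloads(units, bandwidth_budget):
    """Greedy value-per-cost scheduling: the application assigns a
    'value' to each data unit, the system estimates its 'cost', and
    units are fetched best-ratio-first until the budget runs out."""
    order = sorted(units, key=lambda u: u["value"] / u["cost"], reverse=True)
    chosen, spent = [], 0
    for u in order:
        if spent + u["cost"] <= bandwidth_budget:
            chosen.append(u)
            spent += u["cost"]
    return chosen
```

Separating the value function (application policy) from the cost estimator and scheduler (mechanism) is what lets the same machinery serve different application goals.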

Bio: Geoffrey Challen (ne Werner-Allen) is a Ph.D. Candidate in Computer Science at the Harvard University School of Engineering and Applied Sciences, advised by Matt Welsh. His research addresses the systems and networking challenges necessary to enable high-fidelity sensing applications, focusing specifically on maximizing the usage of the limited resources available to sensor network nodes. Working with geoscientists, he has helped perform three sensor network deployments on active Ecuadorean volcanoes. He built and maintains MoteLab, a wireless sensor network testbed used by researchers worldwide, and is a co-editor of a forthcoming book on sensor network deployments. Geoffrey is a 2009 Siebel Fellow, and a Resident Tutor at Eliot House.

Interdomain Routing: Stability, Security and Selfishness

Date and Time
Wednesday, December 16, 2009 - 11:00am to 12:00pm
Location
Computer Science 402
Type
Talk
Speaker
Michael Schapira, from Yale University and UC Berkeley
Host
Jennifer Rexford
The Border Gateway Protocol (BGP) establishes routes between the many independently-administered networks that make up the Internet. Over the past two decades there has been exponential growth in the scale and complexity of the Internet. However, BGP has not changed significantly in comparison and, consequently, does not always cope well with modern-day challenges (bounded computational resources, economically driven manipulations, security attacks, and more). Understanding, "fixing" and redesigning interdomain routing necessitates taking a principled approach that bridges theory and systems research, and breaks traditional disciplinary barriers. I shall present conceptual frameworks for addressing today's challenges and novel routing schemes that improve on the existing ones. Specifically, I shall present (1) a necessary condition for BGP safety, i.e., guaranteed BGP convergence to a "stable" routing outcome; (2) an economic approach to BGP security; and (3) Neighbor-Specific BGP -- a modest extension to BGP that allows network administrators more expressive routing policies while improving global network stability. I shall also discuss the surprising implications of these results for other areas of research: distributed computing, game theory and mechanism design. Finally, I shall outline interesting directions for future work.

Stable Internet Routing Without Global Coordination

Date and Time
Thursday, December 10, 2009 - 4:30pm to 5:30pm
Location
Sherrerd Hall 101
Type
Talk
Host

Incentive Compatibility and Dynamics of Internet Protocols

Date and Time
Tuesday, November 24, 2009 - 3:00pm to 4:00pm
Location
Computer Science 402
Type
Talk
Speaker
Aviv Zohar, from Hebrew University
Host
Jennifer Rexford
The Internet, a large and distributed system, is not controlled by a single economic entity but rather by multiple agents, each with its own agenda. From this point of view, Internet protocols are merely recommendations that agents can choose to ignore if they have anything to gain by doing so. Protocols must therefore be designed to include the right incentives, or agents may decide not to follow them. The talk will mainly focus on a model of congestion control in networks, where the interested agents are the end-hosts who try to maximize their throughput. In this setting, the packet dropping policies of the routers greatly affect the incentives of agents and the convergence properties of the network. I will discuss conditions under which congestion control schemes can be both efficient, so that capacity is not wasted, and incentive compatible, so that each participant can maximize its utility by following the prescribed protocol. As in other protocols, questions of incentive compatibility are often intrinsically linked to the convergence of the network dynamics when agents follow the protocol, and the understanding of one issue goes hand in hand with the understanding of the other.

Based on work with Brighten Godfrey, Hagay Levin, Jeff Rosenschein, Rahul Sami, Michael Schapira, and Scott Shenker.

Highly Available Byzantine Fault Tolerant Distributed Systems

Date and Time
Tuesday, December 1, 2009 - 12:30pm to 1:30pm
Location
Computer Science 402
Type
Talk
Speaker
Atul Singh, from NEC Labs (Princeton)
Host
Michael Freedman
Many distributed services are hosted at large, shared, geographically diverse data centers, and they use replication to achieve high availability even when an entire data center becomes unreachable. Recent events show that non-crash faults occur in these services and can lead to long outages; for example, Amazon's S3 service was recently down for at least seven hours due to a Byzantine fault in its servers. While Byzantine-Fault Tolerance (BFT) could be used to withstand these faults, current BFT protocols can become unavailable if even a small fraction of their replicas are unreachable. This is because existing BFT protocols favor strong safety guarantees (consistency) over liveness (availability).

In this talk, I will present a novel BFT state machine replication protocol called Zeno that trades consistency for higher availability. In particular, Zeno replaces strong consistency (linearizability) with a weaker guarantee (eventual consistency): clients can temporarily miss each other's updates but when the network is stable the states from the individual partitions are merged by having the replicas agree on a total order for all requests. Evaluation of a prototype of Zeno shows that Zeno provides better availability than traditional BFT protocols.
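A toy sketch of the reconciliation step (illustrative only, not Zeno's protocol; the (seq, client) ordering key is an assumption of this sketch): once partitions heal, every replica deterministically merges the per-partition request logs into one total order and replays it, so all replicas converge to the same state.

```python
def merge_histories(partitions):
    """Deterministically merge per-partition request logs into one
    total order.  Every replica sorts all requests by the same key,
    so all replicas replay them identically after the partition heals."""
    merged = sorted((req for log in partitions for req in log),
                    key=lambda r: (r["seq"], r["client"]))
    # Drop duplicates a client may have resent into both partitions.
    seen, total_order = set(), []
    for r in merged:
        rid = (r["client"], r["seq"])
        if rid not in seen:
            seen.add(rid)
            total_order.append(r)
    return total_order
```

The price of this availability is exactly the eventual-consistency window: clients in different partitions can temporarily observe histories that exclude each other's updates until the merge runs.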

Bio:
Atul Singh is a Researcher at NEC Labs, Princeton. He received his PhD in Computer Science from Rice University and spent the last two years visiting the Max Planck Institute for Software Systems (MPI-SWS) in Saarbruecken, Germany. Before that, he spent two years visiting Intel Research Berkeley, working with the P2 group. His interests lie in dependable distributed systems, overlay networks, and declarative networking; he is currently focusing on the exciting challenges emerging in cloud computing.

Classification of Multivariate Time Series via Temporal Abstraction

Date and Time
Thursday, November 12, 2009 - 11:00am to 12:00pm
Location
Computer Science 402
Type
Talk
Speaker
Robert Moskovitch, from the Medical Informatics Research Center, Ben-Gurion University, Israel
Analysis of multivariate time-stamped data, for purposes such as temporal knowledge discovery, classification, and clustering, presents many challenges. Time-stamped data can be sampled at a fixed frequency, as is common with electronic sensors, or at a non-fixed frequency, as is often the case with manual measurements. Additionally, raw temporal data can represent periods of a continuous or nominal value represented by time intervals. Temporal abstraction, in which time-point series are abstracted into meaningful time intervals, is used to bring all the temporal variables to the same representation. In this talk I will present KarmaLego, a fast time-interval mining method for the discovery of non-ambiguous Time Intervals Related Patterns (TIRPs) represented by Allen's temporal relations. I will then present several uses of TIRPs, focusing on the classification of multivariate time series, in which TIRPs serve as classification features. The entire process and several computational abstraction methods for the classification task will be presented. Finally, I will present an evaluation on real datasets from the medical domain, along with meaningful examples of discovered temporal knowledge.
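Since TIRPs are expressed over Allen's interval relations, a small classifier like the following (a standard formulation of Allen's thirteen relations, not code from the talk) shows what is being mined: for every pair of intervals in a pattern, exactly one of these relations holds.

```python
def allen_relation(a, b):
    """Classify the Allen temporal relation of interval a = (start, end)
    relative to b = (start, end).  Exactly one of the 13 relations holds."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1: return "before"
    if b2 < a1: return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if a1 == b1 and a2 == b2: return "equals"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    return "overlaps" if a1 < b1 else "overlapped-by"
```

A TIRP is then a set of symbolic intervals together with the matrix of pairwise relations between them; mining methods in this family often collapse the thirteen relations into a smaller set to reduce ambiguity.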

The Economics of Virtualization

Date and Time
Thursday, November 19, 2009 - 12:30pm to 1:30pm
Location
Sherrerd Hall 101
Type
Talk
Speaker
Paul Laskowski
Host
Edward Felten, CITP
The difficulty of making changes to the internet architecture has spawned widespread interest in large-scale, "virtualized" testbeds as a place to deploy new services. Despite the excitement, uncertainty surrounds the question of how technologies can bridge the gap from testbed to global availability. It is recognized that no amount of validation will spur today's ISPs to make architectural changes, so if new services are to reach a widespread audience, the testbed itself must provide that reach. This suggests two questions: First, would today's network providers (or a new set of providers) ever support a virtualized architecture on a global scale? Second, even if they did, would such a network, spanning a great many domains, support the adoption of new services or upgrades to the infrastructure?

In this talk, I will argue that the answers to these questions depend critically on how money flows to network and service providers. A novel economic theory, rooted in the classic model of Cournot competition, allows us to compare market types with regard to service innovation and network upgrade. According to this analysis, there is a danger that a virtualized testbed inherits the market structure prevalent in the internet architecture, causing investment levels to remain poor. On the other hand, alternate market designs can dramatically improve incentives to invest in services, and even in network upgrades, but may encounter resistance from network providers who are likely to see reduced profits. I will discuss how these alternate market types may be implemented.

RouteBricks: Exploiting Parallelism to Scale Software Routers

Date and Time
Tuesday, October 20, 2009 - 12:30pm to 1:30pm
Location
Computer Science 402
Type
Talk
Speaker
Katerina Argyraki, from EPFL
I will discuss the problem of building fast, programmable routers. I will present RouteBricks, a high-end router architecture that consists entirely of commodity servers. RouteBricks achieves high performance by parallelizing router functionality both across multiple servers and across multiple cores within a single server; it is fully programmable using the familiar Click/Linux environment. I will also present RB4, our 4-server prototype that routes at 35Gbps; this routing capacity can be linearly scaled through the use of additional servers.
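A tiny illustration of the flow-level parallelism such an architecture depends on (hypothetical, not RouteBricks code; the packet field names are invented): hashing each packet's 5-tuple pins every flow to one core, so per-flow state stays core-local and needs no cross-core locking.

```python
def assign_to_core(pkt, num_cores):
    """Map a packet to a core by hashing its flow 5-tuple.
    All packets of one flow hash identically, so each flow's
    state is touched by exactly one core."""
    flow = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
    return hash(flow) % num_cores
```

The same idea extends across servers: a first hash spreads flows over servers, and a second spreads them over cores within each server, which is how aggregate capacity can scale roughly linearly with added hardware.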