The Computational Challenges of Connectomics

Date and Time
Friday, March 16, 2012 - 12:00pm to 1:00pm
Location
Green Hall 0-S-6
Type
Talk
According to a doctrine known as connectionism, brain function and dysfunction depend primarily on patterns of connectivity between neurons. Connectionism has been explored theoretically with mathematical models of neural networks since the 1940s. It has proved difficult to test these models through activity measurements alone. For conclusive empirical tests, information about neural connectivity is also necessary, and could be provided by new imaging methods based on serial electron microscopy. The bottleneck in using these new methods is now shifting to the data analysis problem of extracting neural connectivity from the images. New multibeam scanning electron microscopes will soon generate a petabyte of image data from a cubic millimeter of brain tissue every two weeks. From such images, it should be possible to map every connection between neurons in the volume---in principle. Unfortunately, it could take up to a million years for a single person to carry out this feat manually. Clearly, our capacity to acquire "big data" from the brain has far outpaced our ability to analyze it. My lab has been developing computational technologies to deal with this data deluge. These include image analysis by artificial intelligence (AI) based on machine learning, and methods of "crowdsourcing" image analysis to human intelligence. Our technologies have been implemented in eyewire.org, an online community that mobilizes laypersons to help map neural connections of the retina by interacting with AI.
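
To put these figures in perspective, here is a back-of-envelope calculation (illustrative only, derived from the numbers in the abstract) contrasting the acquisition rate with the implied rate of manual analysis:

```python
# Back-of-envelope check of the data rates quoted above.
PB = 1e15                        # bytes in a petabyte
TWO_WEEKS = 14 * 24 * 3600       # seconds
YEAR = 3.15e7                    # seconds, approximately

acquisition_rate = PB / TWO_WEEKS
print(f"acquisition: {acquisition_rate / 1e6:.0f} MB/s")   # ~830 MB/s

# If one person would need on the order of a million years to analyze
# a petabyte, the implied manual analysis throughput is only:
manual_rate = PB / (1e6 * YEAR)
print(f"manual analysis: {manual_rate:.0f} bytes/s")        # ~32 bytes/s
```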

Using Static Analysis to Diagnose Misconfigured Open Source Systems Software

Date and Time
Wednesday, March 7, 2012 - 1:00pm to 2:00pm
Location
Computer Science 302
Type
Talk
Host
Michael Freedman
Ten years ago, few software developers worked on distributed systems. Today, developers often run code on clusters, relying on large open-source software stacks to manage resources. These systems are challenging to configure and debug. Fortunately, developments in program analysis have given us new tools for managing the complexity of modern software. This talk will show how static analysis can help users configure their systems. I present a technique that builds an explicit table mapping a program's possible error messages to the options that might cause them. As a result, users can get immediate feedback on how to resolve configuration errors.
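
As a rough illustration of the idea (the option names and error messages below are invented for the example; the actual table is derived automatically by static analysis of the program):

```python
# Sketch: a table mapping a program's possible error messages to the
# configuration options whose values might cause them. A real system
# would build this table by static analysis of the program source;
# here it is written out by hand purely to illustrate the output.
ERROR_TO_OPTIONS = {
    "Connection refused":        ["server.port", "server.host"],
    "No such file or directory": ["log.dir", "data.dir"],
    "Permission denied":         ["data.dir"],
}

def diagnose(error_message: str) -> list[str]:
    """Return the configuration options that might cause this error."""
    for pattern, options in ERROR_TO_OPTIONS.items():
        if pattern in error_message:
            return options
    return []

print(diagnose("bind failed: Connection refused"))
# -> ['server.port', 'server.host']
```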

Ariel Rabkin is a fifth (and final)-year PhD candidate in the RAD Lab at UC Berkeley, working with Randy Katz. He previously studied at Cornell University (AB 2006, MEng 2007). He is interested in software quality and software intelligibility. He expects to graduate in May 2012. His dissertation is about applying program analysis to system management, including automatically describing program configuration options and diagnosing configuration errors. He is a contributor to several open source projects, including Hadoop, the Chukwa log collection framework, and the JChord program analysis toolset.

Continuous Distributed Counting for Non-monotonic Streams

Date and Time
Tuesday, February 7, 2012 - 11:00am to 12:00pm
Location
Computer Science 302
Type
Talk
Host
Jennifer Rexford
We consider the problem of tracking a sum of values in a distributed environment where the input values arrive from k distributed streams. A coordinator node must maintain an estimate of the sum that is within a prescribed relative error tolerance at any time, with high probability. The goal is to achieve this objective with small total communication with the coordinator node, in order to minimize the use of network bandwidth, which may be a scarce resource. This is a basic distributed computation problem of interest for various applications, including data aggregation and iterative solving of computational problems.

Specifically, we consider a setting where the input values are selected and assigned to sites by an adversary, but the arrival order of the input items is random. We present a randomized protocol whose total communication is sublinear in the number of updates, together with a matching lower bound. This result sits between two previously known results: for monotonic partial sums (all value updates either positive or negative), the expected total communication is logarithmic in the number of updates, while for general worst-case inputs, the required total communication is linear in the number of updates. Finally, we also discuss applications of the sum tracking module to tracking the second frequency moment and to solving Bayesian linear regression.
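
For intuition about the baseline mentioned above, here is a minimal sketch of the classical threshold scheme for the monotonic case, which achieves the logarithmic communication bound; it is an illustrative simplification, not the randomized protocol from the talk:

```python
# Sketch of the classical scheme for monotonic counts: each site
# re-reports its local count only after it grows by a (1 + eps)
# factor, so the coordinator's estimate stays within relative error
# eps while each site sends only O(log_{1+eps} N) messages.
class Coordinator:
    def __init__(self):
        self.last_report = {}                  # site id -> last reported count

    def report(self, site_id, count):
        self.last_report[site_id] = count

    def estimate(self):
        return sum(self.last_report.values())

class Site:
    def __init__(self, site_id, eps, coordinator):
        self.site_id, self.eps, self.coordinator = site_id, eps, coordinator
        self.count, self.reported = 0, 0

    def update(self, value):                   # value > 0: monotonic case
        self.count += value
        if self.reported == 0 or self.count > (1 + self.eps) * self.reported:
            self.coordinator.report(self.site_id, self.count)
            self.reported = self.count
```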

Zhenming Liu is a PhD candidate in the theory of computation group at Harvard, working with Michael Mitzenmacher. His research focuses on stochastic processes, optimization, and algorithm design, as well as their applications to distributed computation and big data analytics.

A Case for a Global Information Network

Date and Time
Wednesday, January 18, 2012 - 11:00am to 12:00pm
Location
Computer Science 302
Type
Talk
Host
Jennifer Rexford
There is an ever-increasing gap between how the “common man” would describe the Internet and how a network researcher would. This is not surprising, as the “packet with a label identifying the receiver” model seems to be an ill-fitting abstraction for popular services provided by companies such as Google, Facebook, YouTube, or Twitter.

These services are primarily about information dissemination, and a suitable network service abstraction should do more than deliver individual packets with minimal latency. We argue that we should adopt an information-centric system model instead of the current service-oriented one. Instead of dealing with information exchange between specific services on a case-by-case basis, we should consider a global information storage and dissemination network on which individual applications and services are built.
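
To make the contrast concrete, here is a sketch of what an information-centric interface might look like to an application (the API below is invented for illustration, not the abstraction developed in this work):

```python
# Hypothetical sketch of an information-centric interface: applications
# publish and retrieve named data; where the bytes live (which host,
# which replica) is the network's concern, not the application's.
class InformationNetwork:
    def __init__(self):
        self._store = {}                 # name -> data; a stand-in for a
                                         # global dissemination network

    def publish(self, name: str, data: bytes):
        self._store[name] = data

    def retrieve(self, name: str) -> bytes:
        return self._store[name]

net = InformationNetwork()
net.publish("/news/2012-01-18/headline", b"...")
print(net.retrieve("/news/2012-01-18/headline"))
```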

In this talk I will discuss our vision for the future architecture and give a brief overview of our progress so far.

Yan Shvartzshnaider is a PhD candidate in the School of Engineering and Information Technologies at the University of Sydney and a graduate researcher in the Networking Group at National ICT Australia (NICTA)'s ATP lab. His research focuses on large-scale networking systems, in particular on developing a fully distributed information-centric networking abstraction.

Secure Provenance in Distributed Systems

Date and Time
Monday, December 12, 2011 - 11:00am to 12:00pm
Location
Computer Science 402
Type
Talk
Host
Jennifer Rexford
Operators of distributed systems often find themselves needing to answer forensic questions in order to perform a variety of managerial tasks, including fault detection, system debugging, accountability enforcement, and attack analysis. In this talk, we present Secure Provenance, a novel approach that provides the fundamental functionality required for answering such forensic questions: the capability to "explain" the existence (or change) of a certain distributed system state at a given time in a potentially adversarial environment.

We show that it is both possible and practical to efficiently and scalably maintain and query provenance in a distributed fashion, where provenance maintenance and querying are modeled as recursive continuous queries over distributed relations. We then propose enhancements to the provenance model that allow operators to reliably query provenance information in adversarial environments. Our extensions incorporate tamper-evident properties which provide the guarantee that operators can eventually detect the presence of compromised nodes that lie or falsely implicate correct nodes. Finally, we present ongoing efforts that consider privacy protection of sensitive information in provenance maintenance and querying, and discuss our work in the context of our longer term vision towards provably secure distributed systems.
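
As a loose illustration of modeling provenance as recursive queries over a relation (the facts and schema below are invented for the example):

```python
# Sketch: provenance maintained as a relation mapping each derived
# tuple to the tuples it was derived from; "explaining" a state is
# then a recursive query over this relation.
PROV = {
    "route(A,C)": ["route(A,B)", "link(B,C)"],   # derived facts
    "route(A,B)": ["link(A,B)"],
    "link(A,B)": [],                              # base facts
    "link(B,C)": [],
}

def explain(fact: str) -> list[str]:
    """Recursively expand a fact into the base facts that support it."""
    parents = PROV.get(fact, [])
    if not parents:
        return [fact]
    out = []
    for p in parents:
        out.extend(explain(p))
    return out

print(explain("route(A,C)"))   # -> ['link(A,B)', 'link(B,C)']
```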

Bayesian Covariance Regression and Autoregression

Date and Time
Thursday, November 17, 2011 - 12:30pm to 1:30pm
Location
Computer Science 402
Type
Talk
Host
David Blei
Although there is a rich literature on methods for allowing the variance in a univariate regression model to vary with predictors, time, and other factors, relatively little has been done in the multivariate case. A number of multivariate heteroscedastic time series models have been proposed within the econometrics literature, but they are typically limited by a lack of clear margins, computational intractability, and the curse of dimensionality. In this talk, we first introduce and explore a new class of time series models for covariance matrices based on a constructive definition exploiting inverse Wishart distribution theory. The construction yields a stationary, first-order autoregressive (AR) process on the cone of positive semi-definite matrices.

We then turn our focus to more general predictor spaces and scaling to high-dimensional datasets. Our proposed Bayesian nonparametric covariance regression framework harnesses a latent factor model representation. In particular, the predictor-dependent factor loadings are characterized as a sparse combination of a collection of unknown dictionary functions (e.g., Gaussian process random functions). The induced predictor-dependent covariance is then a regularized quadratic function of these dictionary elements. Our proposed framework leads to a highly flexible but computationally tractable formulation with simple conjugate posterior updates that can readily handle missing data. Theoretical properties are discussed and the methods are illustrated through an application to the Google Flu Trends data and the task of word classification based on single-trial MEG data.
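
In symbols, a latent factor construction of this kind can be sketched as follows (the notation here is chosen to match the description above, and is a sketch of the general form rather than the exact model from the talk):

```latex
% Latent factor model with predictor-dependent loadings:
y_i = \Lambda(x_i)\,\eta_i + \epsilon_i, \qquad
\eta_i \sim N(0, I_k), \qquad \epsilon_i \sim N(0, \Sigma_0)

% Loadings as a sparse combination \Theta of dictionary functions \xi(x)
% (e.g., Gaussian process draws), inducing the covariance regression:
\Lambda(x) = \Theta\,\xi(x)
\quad\Longrightarrow\quad
\Sigma(x) = \operatorname{cov}(y_i \mid x_i = x)
          = \Theta\,\xi(x)\,\xi(x)^{\top}\Theta^{\top} + \Sigma_0
```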

Joint work with David Dunson and Mike West.

PhD & Faculty Tech Talk: The HipHop Virtual Machine

Date and Time
Thursday, October 13, 2011 - 5:30pm to 6:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Speaker
Guilherme Ottoni, from Facebook
To enable fast software development, Facebook has widely used the PHP programming language. Although easy to learn and quick to develop with, PHP is known for incurring a significant performance penalty. To address this, the HipHop compiler team at Facebook has been working on multiple fronts. In this talk, I’ll first give an overview of the tools we have built and currently deploy in our production environment. After that, I’ll present the HipHop Virtual Machine, which is a new effort we’re pursuing to improve the performance of PHP via just-in-time (JIT) compilation techniques.

Dr. Guilherme Ottoni (*08) is currently a Software Engineer at Facebook, where he’s a member of the HipHop Compiler team. Before joining Facebook, Dr. Ottoni was a Staff Research Scientist at Intel Labs, where he led a number of projects in the areas of dynamic binary translation, automatic parallelization, and microprocessor simulation. In a prior life, Dr. Ottoni co-organized the Brazilian Olympiad in Informatics and coached the Brazilian teams at several International Olympiads in Informatics. Dr. Ottoni received his B.S.E. from FURG, M.S. from UNICAMP, and Ph.D. from Princeton. He has published more than 30 papers and has several issued and pending patents.

RSVP at facebook.com/princetonphdtechtalk

Efficient Content Distribution with Managed Swarms

Date and Time
Friday, October 7, 2011 - 11:00am to 12:00pm
Location
Computer Science 302
Type
Talk
Peer-assisted content distribution has become more prevalent in recent years, matching the growth of high-quality media on the Internet. Peer-assisted systems promise to improve download performance and scalability by harnessing bandwidth at peers in addition to the resources of server-class hosts. However, existing systems are ill-suited for high-performance distribution: today's systems focus on distribution of only a single file, and are inefficient for distributing large content libraries.

In this talk I will introduce a new approach to content distribution that achieves high performance based on managed swarms. Managed swarms address download performance across multiple concurrent swarms by viewing content distribution as a global optimization problem. A logically centralized coordinator orchestrates downloads by measuring the behavior of swarms and calculating an efficient allocation of resources to achieve a global performance goal. I will discuss the role of the coordinator for managing swarms, and then describe two new algorithms based on managed swarms that maximize system-wide throughput in realistic deployment scenarios. Extensive simulations and deployments show that managed swarms provide a scalable, efficient solution for distributing content libraries, and that they significantly outperform centralized distribution services and existing swarming protocols.
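
As a toy illustration of the coordinator's role (a proportional-allocation heuristic invented for this sketch; the actual algorithms from the talk are more sophisticated):

```python
# Toy sketch of a managed-swarm coordinator: given measured demand and
# per-swarm peer bandwidth, allocate scarce server bandwidth to the
# swarms where it buys the most additional throughput.
def allocate(server_bw: float, swarms: dict[str, dict]) -> dict[str, float]:
    """swarms: name -> {'demand': bytes/s wanted, 'peer_bw': bytes/s the
    swarm's peers already supply}. Returns server bytes/s per swarm."""
    # Shortfall a swarm cannot cover from its own peers:
    deficit = {n: max(s["demand"] - s["peer_bw"], 0) for n, s in swarms.items()}
    total = sum(deficit.values())
    if total == 0:
        return {n: 0.0 for n in swarms}
    # Proportional split of server bandwidth across deficits:
    return {n: server_bw * d / total for n, d in deficit.items()}

print(allocate(100.0, {
    "movie-A": {"demand": 300.0, "peer_bw": 250.0},   # deficit 50
    "movie-B": {"demand": 500.0, "peer_bw": 350.0},   # deficit 150
}))   # -> {'movie-A': 25.0, 'movie-B': 75.0}
```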

Ryan S. Peterson is completing his Ph.D. at Cornell University, where he has been working on content distribution, advised by Emin Gun Sirer. He has also worked on distributed systems for publish-subscribe, reliable storage, and content discovery. Prior to Cornell, Ryan earned a B.S.E. degree from Princeton University. Ryan is the 2009 recipient of the Google Fellowship in Distributed Systems, and he will be joining Google next month.

Connecting the Disconnected

Date and Time
Monday, July 11, 2011 - 11:00am to 12:00pm
Location
Computer Science 402
Type
Talk
Host
Jennifer Rexford
Mobility, in all of its forms, results in networks with varying degrees of connectivity. Providing communication services in such networks relies on solutions that treat these disconnections as an expected part of the system, instead of as an exception to be handled out of band. However, handling disconnectivity breaks most architectures designed for more stable, connected networks. Instead of a standard layered approach to communication services, communication in such intermittently connected networks depends on three key components: contacts (knowing who we meet), context (knowing what is in our neighborhood), and content (knowing how to find and deliver data). In this talk, I will present some of our current research on support for communication in intermittently connected networks, vehicular networks, and mobile social networks, as well as the target applications and deployment testbeds.
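
A hypothetical sketch of how these three components might combine in a single store-carry-forward decision (all structure below is invented for illustration):

```python
# Sketch: a store-carry-forward decision in an intermittently
# connected network, combining contacts, context, and content.
def should_hand_off(contacts: dict[str, float],  # peer -> P(peer meets the destination)
                    neighborhood: set[str],      # context: peers currently in range
                    my_delivery_prob: float) -> str | None:
    """Hand the message to the in-range peer most likely to meet its
    destination, if any beats our own delivery probability."""
    candidates = {p: contacts.get(p, 0.0) for p in neighborhood}
    best = max(candidates, key=candidates.get, default=None)
    if best is not None and candidates[best] > my_delivery_prob:
        return best        # content moves one hop closer to delivery
    return None            # store and carry until a better contact appears
```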

Robin Kravets is an Associate Professor at the University of Illinois at Urbana-Champaign and the head of the Mobius group at UIUC, which researches communication issues in all types of networks that are challenged by mobility, including wireless LANs, ad hoc networks, sensor networks, delay and disruption tolerant networks, vehicular networks, mobile social networks, and personal area networks. Her research focuses on solutions that enable effective power management, connectivity management, data transport, congestion management, location management, routing, and security. For more information, visit http://mobius.cs.illinois.edu.

Datacenter Network Architecture @ Yahoo!: Past, present, and challenges for the future

Date and Time
Monday, May 2, 2011 - 1:00pm to 2:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Speaker
Igor Gashinsky, from Yahoo!
Host
Michael Freedman
As the number of people and applications on the Internet grows exponentially, so do the complexities of serving vast amounts of information. From the progression to unlimited e-mail storage, 10+ megapixel photos, and HD streaming video being stored and shared online, to semi-real-time page personalization and real-time search applications, the storage and computing power requirements have grown by leaps and bounds over the past 15 years.

Ever wonder what the datacenter networks supporting these applications look like? This talk will provide a basic overview of large-scale datacenter network design principles, stepping through a history of how those designs evolved at Yahoo! over the years, the challenges that were faced, their solutions, as well as the challenges we anticipate in the (near) future.

Igor Gashinsky is a principal architect at Yahoo!, a global content provider, where he is involved in projects ranging from overall network design (including highly resilient switching and routing architecture, peering, MPLS, and L4-7 load balancing) to scalable content delivery methodologies and DNS architecture. Prior to his 8.5 years with Yahoo!, Igor worked as a Senior Systems & Network Engineer for HotJobs.com, and consulted for a number of clients, working on network and systems architecture, network security, system clustering/HA, high-performance storage solutions, and general Unix system and network administration. Igor holds a BS in Computer Science from Stevens Institute of Technology.
