From EDA to NDA: Treating Networks like Chips and Programs

Date and Time
Tuesday, June 9, 2015 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Speaker
George Varghese, from Microsoft Research
Host
Jennifer Rexford

Surveys reveal that network outages are prevalent and that many take hours to resolve, resulting in significant lost revenue. Many bugs are caused by errors in configuration files, which are written in arcane, low-level languages akin to machine code. Further, mistakes are often hunted down using rudimentary tools such as Ping and Traceroute.

Taking our cue from other fields such as hardware design, we suggest fresh approaches. Our first attempt was a geometric model of network forwarding called Header Space, together with parsers that convert router configurations to header-space representations. Header Space ideas were used to build a static checker (Hassel) that can identify reachability bugs and a dynamic checker (ATPG) that can identify performance faults. Unlike classical model checkers, Hassel has no notion of a specification or modeling language, which makes it difficult to write higher-level specifications or deal with changing router behaviors. To remedy this, we taught an old dog (Datalog) new tricks, creating what we call Network Optimized Datalog (NoD), followed by extensions to quantitative and automatic verification.
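
The core Header Space idea is easy to state: a header is a point (or, with wildcards, a region) in {0,1}^L, and each device applies a transfer function to regions of that space, so reachability reduces to composing transfer functions along a path. The sketch below is a minimal illustration of that model in Python, written for this summary rather than taken from Hassel; names like apply_rule are mine.

    # A header is a string over {0,1,x}, where 'x' is a wildcard bit; a
    # rule matches a region of header space and rewrites some bits.

    def intersect_bit(a, b):
        """Intersection of two ternary bits; None if empty."""
        if a == 'x': return b
        if b == 'x': return a
        return a if a == b else None

    def intersect(h1, h2):
        """Intersection of two wildcard expressions; None if disjoint."""
        bits = [intersect_bit(a, b) for a, b in zip(h1, h2)]
        return None if None in bits else ''.join(bits)

    def apply_rule(header, match, rewrite):
        """If `header` overlaps `match`, return the rewritten region."""
        hit = intersect(header, match)
        if hit is None:
            return None
        # in the rewrite, '0'/'1' force a bit and 'x' leaves it unchanged
        return ''.join(h if r == 'x' else r for r, h in zip(rewrite, hit))

    # Can any packet in region 10xx traverse a two-hop path?
    h = '10xx'
    for match, rewrite in [('1xxx', 'x0xx'), ('x0xx', 'xxxx')]:
        h = apply_rule(h, match, rewrite)
        if h is None:
            print('unreachable')
            break
    else:
        print('reachable as', h)

Static checking in the style of Hassel then amounts to pushing the all-wildcard region from each ingress port through every path and inspecting what survives.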

These results suggest that concepts from Electronic Design Automation (EDA) and program verification can be leveraged to create what might be termed Network Design Automation (NDA).   What might the equivalent of Layout Versus Schematic tools or Specification Mining be?  What are the differences in the domain that can be exploited? The second part of this talk will explore this vision, touching upon modular network semantics, language design, performance invariants, and interactive network debuggers.  This is joint work with collaborators at Berkeley, MSR, and Stanford.

George Varghese received his Ph.D. in 1992 from MIT. He was a professor at Washington University from 1993 to 1999 and at UCSD from 1999 to 2013, and was the Distinguished Visitor in the computer science department at Stanford University from 2010 to 2011. He joined Microsoft Research in 2012.

His book "Network Algorithmics" was published in December 2004 by Morgan Kaufmann. In May 2004, he co-founded NetSift, which was acquired by Cisco Systems in 2005. With colleagues, he has won best-paper awards at SIGCOMM (2014), ANCS (2013), OSDI (2008), and PODC (1996), as well as the IETF Applied Networking Research Prize (2013). He won the Kobayashi Award and the SIGCOMM Lifetime Award, both in 2014, and the IIT Bombay Distinguished Alumni Award in 2015.

Universal laws and architectures: theory and lessons from brains, nets, hearts, bugs, grids, flows, and zombies

Date and Time
Friday, May 15, 2015 - 1:30pm to 2:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Speaker
John Doyle, from Caltech
Host
Jennifer Rexford

This talk aims to accessibly introduce a new theory of network architecture relevant to biology, medicine, and technology (particularly SDN/NFV and cyberphysical systems), initially minimizing math details. Key ideas are motivated by familiar examples from neuroscience, including live demos using audience brains, and further illustrated with examples from technology and biology. The status of the necessary math will be sketched in as much detail as time permits. Background material is in online videos (accessible from website above) and a recent blog post: rigorandrelevance.wordpress.com/author/doyleatcaltech.

My research is aimed at developing a more “unified” theory for complex networks motivated by and drawing lessons from neuroscience [4], cell biology [3], medical physiology [9], technology (internet, smart grid, sustainable infrastructure) [1], [8], and multiscale physics [2], [5], [6]. This theory involves several elements: hard limits, tradeoffs, and constraints on achievable robust, efficient performance (“laws”); the organizing principles that succeed or fail in achieving them (“architectures” and protocols); the resulting high-variability data and “robust yet fragile” behavior observed in real systems and case studies (behavior, data, statistics); the processes by which systems adapt and evolve (variation, selection, design); and their unavoidable fragilities (hijacking, parasites, predation, zombies).

We will leverage a series of case studies with live demos from neuroscience, particularly vision and sensorimotor control, plus some hopefully familiar and simple insights from medicine, cell biology, and modern computer and networking technology. Zombies emerge throughout as a ubiquitous, strangely popular, and annoying system fragility, particularly in the form of zombie science. In addition to the above-mentioned blog and videos, papers [1] and [4] (and references therein) are the most accessible and broad introductions, while the other papers give more domain-specific details. For math details the best place to start is Nikolai Matni’s website (cds.caltech.edu/~nmatni/).
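
To give one concrete flavor of such a “law” (a standard example from robust control in this research program, not a claim made in the abstract itself): Bode's sensitivity integral says that for a feedback loop with sensitivity function S(s), open-loop unstable poles p_k, and sufficient high-frequency roll-off,

    \int_0^\infty \ln \lvert S(j\omega) \rvert \, d\omega \;=\; \pi \sum_k \operatorname{Re}(p_k) \;\ge\; 0,

so attenuating disturbances at some frequencies (where ln|S| < 0) necessarily amplifies them at others (where ln|S| > 0). It is a conservation law for fragility, the “robust yet fragile” tradeoff in its simplest form.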

Selected recent references:
[1] Alderson DL, Doyle JC (2010) Contrasting views of complexity and their implications for network-centric infrastructures. IEEE Trans. Systems, Man, and Cybernetics—Part A: Systems and Humans 40:839–852.
[2] Sandberg H, Delvenne JC, Doyle JC (2011) On lossless approximations, the fluctuation-dissipation theorem, and limitations of measurements. IEEE Trans. Automatic Control, Feb. 2011.
[3] Chandra F, Buzi G, Doyle JC (2011) Glycolytic oscillations and limits on robust efficiency. Science 333:187–192.
[4] Doyle JC, Csete ME (2011) Architecture, constraints, and behavior. Proc. Natl. Acad. Sci. USA 108(Suppl 3):15624–15630.
[5] Gayme DF, McKeon BJ, Bamieh B, Papachristodoulou A, Doyle JC (2011) Amplification and nonlinear mechanisms in plane Couette flow. Physics of Fluids 23(6):065108.
[6] Page MT, Alderson D, Doyle J (2011) The magnitude distribution of earthquakes near Southern California faults. J. Geophys. Res. 116:B12309, doi:10.1029/2010JB007933.
[7] Namas R, Zamora R, An G, Doyle J, et al. (2012) Sepsis: Something old, something new, and a systems view. Journal of Critical Care 27(3).
[8] Chen L, Ho T, Chiang M, Low S, Doyle J (2012) Congestion control for multicast flows with network coding. IEEE Trans. Information Theory 58(9):5908–5921.
[9] Li, Cruz, Chien, Sojoudi, Recht, Stone, Csete, Bahmiller, Doyle (2014) Robust efficiency.

John Doyle is the Jean-Lou Chameau Professor of Control and Dynamical Systems, Electrical Engineering, and Bioengineering at Caltech (BS and MS in EE, MIT, 1977; PhD in Math, UC Berkeley, 1984). His research is on mathematical foundations for complex networks with applications in biology, technology, medicine, ecology, and neuroscience. Paper prizes include IEEE Baker, IEEE Automatic Control Transactions (twice), ACM SIGCOMM, and AACC American Control Conference awards. Individual awards include the IEEE Power Hickernell, AACC Eckman, UCB Friedman, IEEE Centennial Outstanding Young Engineer, and IEEE Control Systems Field awards. Best known for fabulous friends, colleagues, and students, plus national and world records and championships in various sports. Extremely fragile.

A fast method of verifying network routing with back-trace header space analysis

Date and Time
Monday, May 18, 2015 - 10:00am to 11:00am
Location
Computer Science 402
Type
Talk
Speaker
Toshio Tonouchi, from NEC
Host
Jennifer Rexford

It is a tough job for operators to write perfectly accurate configurations for the many network elements in a large network. Erroneous configurations can cause critical incidents in the networks on which many ICT systems run, and can open security holes as well. There has been much work on preventing erroneous configurations, but existing approaches take a long time to verify routing in large networks. We propose a new method of verifying network routing. It focuses only on verifying isolation and reachability, but it verifies these properties in O(R^2) time, where R is the number of flow entries, while an existing verification method requires O(R^3). We also provide a proof of the correctness of our method.
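
A minimal sketch of the back-trace idea, under my own assumptions about the data model (the paper's actual algorithm and data structures may differ): represent headers as wildcard strings, then walk the flow entries in reverse, computing for each rule the pre-image of the destination region, i.e., the source headers that can still reach it.

    # Headers are strings over {0,1,x}; a rule is (match, rewrite), where
    # in the rewrite '0'/'1' force a bit and 'x' passes it through.

    def meet(a, b):
        """Ternary bit intersection; None if empty."""
        if a == 'x': return b
        if b == 'x': return a
        return a if a == b else None

    def pre_image(region, match, rewrite):
        """Input headers that rule (match -> rewrite) maps into `region`."""
        bits = []
        for reg, m, rw in zip(region, match, rewrite):
            if rw != 'x':                 # rewrite forces this bit
                if reg not in ('x', rw):
                    return None           # the forced bit misses the region
                need = m                  # the input bit only has to match
            else:                         # bit passes through unchanged
                need = meet(m, reg)
                if need is None:
                    return None
            bits.append(need)
        return ''.join(bits)

    # Back-trace a two-rule path from the destination region 1xx0:
    region = '1xx0'
    for match, rewrite in reversed([('1xxx', 'xxx0'), ('xxx0', 'xxxx')]):
        region = pre_image(region, match, rewrite)
        if region is None:
            print('isolated: no header reaches the destination')
            break
    else:
        print('sources reaching the destination:', region)

One backward pass per destination visits each flow entry once, which is where a quadratic rather than cubic bound in R becomes plausible.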

Software-Defined Networking at the National Security Agency

Date and Time
Monday, April 13, 2015 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Speaker
Bryan Larish, from the National Security Agency

This event is co-sponsored by CITP and the Department of Computer Science.

The IT department at the NSA faces the same pressures as those of many other large organizations: budgets and manpower are declining, while demands for higher reliability and additional services are increasing. Because of these factors, IT departments must change how they do business when building their infrastructure. Open networking and software-defined networking (SDN) are two promising technology trends that the NSA is applying to these challenges in different areas of its network architecture. This talk will detail the agency's open networking and SDN initiatives in three areas: an OpenStack data center; a data center that hosts a storage cloud; and the campus-area networks at branch offices. The talk will describe the motivation for each initiative, the architectures and solutions considered, and early lessons learned from development and deployment.

Bryan Larish is the Technical Director for Enterprise Connectivity & Specialized IT Services at the National Security Agency (NSA), and he is responsible for setting the technical direction of the development and operation of NSA’s global network infrastructure.

Prior to joining NSA, Bryan worked in the Chief Engineer’s office at the U.S. Navy’s Space and Naval Warfare Systems Command (SPAWAR). In that role, he was responsible for implementing engineering techniques used to manage, architect, and plan the U.S. Navy’s communications/IT systems portfolio. Bryan’s other experience includes Technical Director for Navy engineering policy and various engineering roles at SPAWAR.

Bryan holds a Ph.D. and M.S. in electrical and computer engineering from the Georgia Institute of Technology and a B.S.E. in electrical engineering from Arizona State University.

CDN-on-Demand: Fighting DoS with Untrusted Clouds

Date and Time
Tuesday, April 7, 2015 - 11:00am to 12:00pm
Location
Friend Center 108
Type
Talk

We present the design and implementation of CDN-on-Demand, a system that provides low-cost protection for websites against DDoS attacks without affecting website operation or expenses under normal operating conditions. CDN-on-Demand is a software package rather than a service: when load is high it migrates the website to a scalable infrastructure, serving clients from proxies that it automatically deploys on multiple low-cost cloud services. In contrast to current CDN services, CDN-on-Demand protects against rogue service providers and compromised proxies by introducing an object-security mechanism, which eliminates the need to trust the host with private keys or certificates. Furthermore, CDN-on-Demand protects the website against economic attacks and degradation-of-service attacks that exploit the automatic scaling mechanism; we show that popular services are vulnerable to such attacks. We provide an open-source implementation of CDN-on-Demand, which we use to evaluate each component as well as the integrated system.
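
The sketch below illustrates the general object-security pattern (my illustration of the concept; the paper's concrete mechanism may differ): the origin signs each object, an untrusted proxy merely caches and serves the signed objects, and clients verify them, so no private key or certificate ever reaches the proxy.

    # Requires the `cryptography` package (pip install cryptography).
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )
    from cryptography.exceptions import InvalidSignature

    # -- at the origin: the private key never leaves this machine --
    signing_key = Ed25519PrivateKey.generate()
    verify_key = signing_key.public_key()       # distributed to clients

    obj = b'<html>front page</html>'
    signed_obj = (obj, signing_key.sign(obj))   # the proxy caches both parts

    # -- at the client, after fetching from an untrusted proxy --
    body, sig = signed_obj
    try:
        verify_key.verify(sig, body)
        print('object verified')
    except InvalidSignature:
        print('object was tampered with in transit or at the proxy')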

Joint work with Amir Herzberg and Michael Sudkovich

Networks of Networks of Quantum Repeaters

Date and Time
Friday, March 6, 2015 - 1:30pm to 2:30pm
Location
Computer Science 401
Type
Talk
Speaker
Rodney Van Meter, from Keio University
Host
Margaret Martonosi

Experimental progress toward quantum repeaters is moving at a tremendous rate, and theorists have proposed half a dozen approaches to managing errors to create high-fidelity entanglement along a chain of repeaters.  The next frontier is extending from one-dimensional chains to complex topologies.  Problems in network engineering include robust protocol design and resource management.  I will give an overview of these issues, then discuss the even more daunting challenge of creating networks of networks -- a true quantum Internet -- capable of coupling networks that are heterogeneous in both physical technology and error management scheme.
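
A toy calculation (standard Werner-state arithmetic, offered here as background rather than anything from the talk) shows why error management is unavoidable: entanglement swapping multiplies the Werner parameters of the links, so the raw end-to-end fidelity of a repeater chain decays geometrically toward 0.25, the fidelity of a useless maximally mixed pair.

    def swap_chain(link_fidelity, n_links):
        """End-to-end fidelity after swapping a chain of Werner-state links."""
        w = (4 * link_fidelity - 1) / 3    # Werner parameter of one link
        return (1 + 3 * w ** n_links) / 4  # swapping multiplies the w's

    for n in (1, 2, 4, 8, 16):
        print(f'{n:2d} links at F=0.95 -> end-to-end F = {swap_chain(0.95, n):.3f}')

Purification and error-corrected schemes exist precisely to interrupt this decay, and the half-dozen repeater approaches mentioned above differ mainly in how they do so.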

Rodney Van Meter received a B.S. in engineering and applied science from the California Institute of Technology in 1986, an M.S. in computer engineering from the University of Southern California in 1991, and a Ph.D. in computer science from Keio University in 2006. His current research centers on quantum computer architecture and quantum networking.  Other research interests include storage systems, networking, and post-Moore's Law computer architecture.  He is now an Associate Professor of Environment and Information Studies at Keio University's Shonan Fujisawa Campus.  Dr. Van Meter is a member of AAAS, ACM and IEEE.

Resource Virtualization for Software-defined Networks

Date and Time
Wednesday, November 12, 2014 - 12:00pm to 1:30pm
Location
Computer Science 402
Type
Talk

Software-defined networking centralizes control-plane functionality, separating it from the data plane, which is responsible for packet forwarding. Many management tasks, such as finding heavy hitters for multi-path routing, can run on an SDN with limited switch resources. By abstracting tasks away from the resources of individual switches, a resource manager at the controller can optimize their resource usage. Because management tasks typically have a measurement-control loop, my projects DREAM and vCRIB address the measurement and control sides, respectively. First, DREAM ensures a minimum user-specified level of accuracy for each task instead of allocating a fixed amount of resources to it: it dynamically allocates resources across tasks in reaction to traffic and task dynamics, which allows resource multiplexing. DREAM is 2x better at the tail of minimum accuracy satisfaction compared to current practice, even under moderate load. Next, vCRIB automatically distributes control rules across all switches in the network, providing the abstraction of a centralized rule repository whose capacity equals the combined resources of all switches. vCRIB finds feasible rule placements with less than 10% traffic overhead in cases where the traffic-optimal placement is infeasible under CPU and memory constraints.
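
As a rough sketch of DREAM's allocation idea (my paraphrase of the mechanism, not the project's code): each task exposes an accuracy estimate and a target, and the allocator repeatedly moves switch resources (e.g., TCAM counters) from the task with the most accuracy headroom to the task with the least.

    # Toy accuracy curves: more counters give diminishing returns.
    tasks = {
        'heavy_hitters': {'counters': 240, 'target': 0.80,
                          'accuracy': lambda c: min(1.0, c / 250)},
        'change_detect': {'counters': 40,  'target': 0.90,
                          'accuracy': lambda c: min(1.0, c / 120)},
    }

    def headroom(t):
        return t['accuracy'](t['counters']) - t['target']

    def rebalance(tasks, step=10):
        """Shift `step` counters from the most to the least satisfied task."""
        donor = max(tasks.values(), key=headroom)
        taker = min(tasks.values(), key=headroom)
        if headroom(donor) > 0 > headroom(taker) and donor['counters'] > step:
            donor['counters'] -= step
            taker['counters'] += step

    for _ in range(12):
        rebalance(tasks)
    print({name: t['counters'] for name, t in tasks.items()})
    # When no task has spare headroom left, the budget is infeasible;
    # DREAM would then reject or adapt a task rather than thrash.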

Masoud Moshref is a fifth-year PhD candidate at the University of Southern California. He works on resource virtualization in software-defined networks in the Networked Systems Lab under the supervision of Ramesh Govindan and Minlan Yu. He received his M.Sc. and B.Sc. in Information Technology Engineering from Sharif University of Technology in Iran.

Reading code as if it mattered

Date and Time
Wednesday, November 5, 2014 - 12:15pm to 1:30pm
Location
Computer Science 302
Type
Talk
Speaker
Yaron Minsky, from Jane Street
Host
David Walker

Code review is a fundamental part of developing high quality software.  Pretty much every software organization that cares about good code has some kind of code review system in place.

But automating code review, particularly for a large and complex codebase with many active contributors, is surprisingly challenging. This is especially so for a correctness-critical codebase where it's important that review be done completely, even in awkward corner cases.

This talk will cover the design of Iron, a code review and release management tool developed at Jane Street to address these problems. We'll show how Iron models the process of code review and uses that model to effectively handle complex cases like reading through a conflicted merge. In addition, we'll describe how Iron's integrated release management and its system of hierarchical features allow multiple different release workflows to coexist harmoniously in the same codebase.
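
A tiny sketch of the kind of bookkeeping this implies (my abstraction for illustration; Iron's actual model is richer): track, per reviewer, the last version of each file they have read, and derive the outstanding obligation as the diff from that version to the current tip, so review survives rebases and merges instead of restarting from scratch.

    def obligations(reviewed, tip):
        """Map each file a reviewer still owes to its (seen, current) versions."""
        todo = {}
        for path, version in tip.items():
            seen = reviewed.get(path)
            if seen != version:
                todo[path] = (seen, version)
        return todo

    reviewed = {'core/order.ml': 'v3', 'core/fix.ml': 'v7'}  # read so far
    tip = {'core/order.ml': 'v5', 'core/fix.ml': 'v7',
           'core/risk.ml': 'v1'}                             # feature tip

    for path, (old, new) in obligations(reviewed, tip).items():
        print(f'review {path}: {old or "new file"} -> {new}')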

Yaron Minsky heads the Technology group at Jane Street, a proprietary trading firm that is the largest industrial user of OCaml. He was responsible for introducing OCaml to the company and for managing the company's transition to using OCaml for all of its core infrastructure. Today, billions of dollars' worth of securities transactions flow through those systems each day. Yaron obtained his PhD in Computer Science from Cornell University, where he studied distributed systems. He has lectured, blogged, and written about OCaml for years, with articles published in Communications of the ACM and the Journal of Functional Programming. He chairs the steering committee of the Commercial Users of Functional Programming and is a member of the steering committee for the International Conference on Functional Programming.

Jane Street Tech Talk

Date and Time
Thursday, November 6, 2014 - 4:30pm to 6:30pm
Location
Computer Science Tea Room
Type
Talk

Jane Street
Complicated systems require expressive configuration languages. But language design is hard: it's no surprise that many applications have either limited configurability or an unwieldy configuration format with complex semantics.

At Jane Street, we have seen this problem enough times that we decided to start writing our configs the same way that we write our code: in OCaml. In this talk, we'll discuss our experiences using ocaml_plugin [1], a library we developed to embed OCaml within an application, providing a configuration language that is both expressive and familiar.

We'll also discuss some of the potential problems of using a Turing-complete language for configuration, as well as how to capture some of the benefits of a simpler and more constrained configuration system without giving up the power of a programming language.
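
For readers who don't write OCaml, here is the same "configs are code" pattern sketched in Python (an analogy of mine, not Jane Street's system): the application loads a config as an ordinary module and validates its exports against a schema at load time, so configs can compute and share helpers while the host still fails loudly on bad ones.

    import importlib.util
    from dataclasses import dataclass

    @dataclass
    class Config:
        retries: int
        endpoints: list

    def load_config(path):
        spec = importlib.util.spec_from_file_location('user_config', path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)        # run the config "program"
        return Config(retries=module.RETRIES,  # fails loudly if missing
                      endpoints=module.ENDPOINTS)

    # The config file itself is ordinary code, e.g.:
    #   RETRIES = 2 * 3
    #   ENDPOINTS = [f'host{i}.example.com' for i in range(4)]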

[1] https://github.com/janestreet/ocaml_plugin

Deep Packet Inspection as a Service

Date and Time
Wednesday, November 12, 2014 - 10:00am to 11:00am
Location
Computer Science 402
Type
Talk
Speaker
Yaron Koral, from Princeton University
Host
Jennifer Rexford

Middleboxes play a major role in contemporary networks, as forwarding packets is often not enough to meet operator demands, and other functionalities (such as security, QoS/QoE provisioning, and load balancing) are required. Traffic is usually routed through a sequence of such middleboxes, which either reside across the network or in a single, consolidated location. Although middleboxes provide a vast range of different capabilities, there are components that are shared among many of them. A task common to almost all middleboxes that deal with L7 protocols is Deep Packet Inspection (DPI). Today, traffic is inspected from scratch by each middlebox on its route. In this work, we propose to treat DPI as a service to the middleboxes, implying that traffic should be scanned only once, but against the data of all middleboxes that use the service. The DPI service then passes the scan results to the appropriate middleboxes. Having DPI as a service has significant advantages in performance, scalability, robustness, and as a catalyst for innovation in the middlebox domain. Moreover, technologies and solutions for current Software-Defined Networks (SDN), such as SIMPLE, make it feasible to implement such a service and route traffic to and from its instances.
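
A minimal sketch of the "scan once, serve many" interface (my illustration of the idea, not the authors' implementation): middleboxes register their patterns with the DPI service, which scans each payload a single time against the union of patterns and routes every match back to the middleboxes that asked for it.

    from collections import defaultdict

    class DPIService:
        def __init__(self):
            self.subscribers = defaultdict(list)   # pattern -> middleboxes

        def register(self, middlebox, patterns):
            for p in patterns:
                self.subscribers[p].append(middlebox)

        def scan(self, payload):
            """Scan once against all patterns; results routed per middlebox."""
            results = defaultdict(list)
            # a real service would compile the union of patterns into one
            # Aho-Corasick automaton; substring search stands in for it here
            for pattern, boxes in self.subscribers.items():
                if pattern in payload:
                    for box in boxes:
                        results[box].append(pattern)
            return dict(results)

    dpi = DPIService()
    dpi.register('ids', [b'evil.exe', b'xp_cmdshell'])
    dpi.register('app_firewall', [b'xp_cmdshell'])
    print(dpi.scan(b'GET /a?q=xp_cmdshell HTTP/1.1'))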

This is joint work with Anat Bremler-Barr, Yotam Harchol, and David Hay, and will appear at CoNEXT in December 2014.

Yaron received his PhD from Tel Aviv University and is a new postdoc at Princeton.
