Global Measurements of Internet Censorship

Date and Time
Friday, April 14, 2017 - 12:30pm to 1:30pm
Location
127 Corwin Hall
Type
Talk
Host
The Program for Quantitative and Analytical Political Science, Department of Politics

Photo of Professor Nick Feamster (photo by David Kelly Crow)

Internet users in many countries around the world are subject to various forms of censorship and information control. Despite its widespread nature, however, measuring Internet censorship on a global scale has remained an elusive goal. Internet censorship is known to vary across time and across regions (and Internet Service Providers) within a country. To capture these complex dynamics, Internet censorship measurement must be both continuous and distributed across a large number of vantage points. To date, gathering such information has required recruiting volunteers to perform measurements from within countries of interest; this approach permits neither continuous measurement nor collection from a large number of measurement locations, and it may also put the people performing the measurements at risk.

Over the past four years, we have developed a collection of measurement techniques to surmount the limitations of these conventional approaches. In this talk, I will describe three such techniques: (1) Encore, a tool that performs cross-origin requests to measure Web filtering; (2) Augur, a tool that exploits side-channel information in the Internet Protocol (IP) to measure filtering using network-level access control lists; and (3) a tool to measure DNS filtering using queries through open DNS resolvers. These three tools allow us, for the first time, to characterize Internet censorship continuously, from hundreds of countries around the world, at different layers of the network protocol stack. Each of these techniques involves both technical and ethical challenges. I will describe some of the challenges that we faced in designing and implementing these tools, how we tackled these challenges, our experiences with measurements to date, and our plans for the future.
Long term, our goal is to collaborate with social scientists to bring data to bear on a wide variety of questions concerning Internet censorship and information control; I will conclude with an appeal to cross-disciplinary work in this area and some ideas for how computer scientists and social scientists might work together on some of these pressing questions going forward.

This research is in collaboration with Sam Burnett, Roya Ensafi, Paul Pearce, Ben Jones, Frank Li, and Vern Paxson.
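The third technique, measuring DNS filtering through open resolvers, ultimately reduces to comparing the answers an in-country resolver returns against those from a trusted control resolver. The sketch below shows only that comparison step, with resolver I/O stubbed out and all domains and addresses hypothetical; it is an illustration of the idea, not the project's actual tooling.

```python
# Sketch of the answer-comparison step behind DNS filtering measurement:
# query the same domain through a trusted control resolver and an open
# in-country resolver, then flag answers that diverge. Resolver I/O is
# stubbed out here; a real measurement would send DNS queries over UDP.

def classify(domain, control_answers, test_answers):
    """Label a (domain, resolver) pair based on answer divergence."""
    if not test_answers:
        return "no-answer"            # query dropped, or NXDOMAIN injected
    if control_answers & test_answers:
        return "consistent"           # at least one IP matches the control
    return "divergent"                # candidate DNS manipulation

# Hypothetical measurement data: domain -> (control answers, test answers).
measurements = {
    "example.com":     ({"93.184.216.34"}, {"93.184.216.34"}),
    "blocked.example": ({"198.51.100.7"},  {"10.10.34.36"}),
    "dropped.example": ({"203.0.113.9"},   set()),
}

for domain, (control, test) in measurements.items():
    print(domain, classify(domain, control, test))
```

In practice the comparison must also tolerate benign divergence (CDNs serve different IPs by region), which is one reason the measurement has to be continuous and distributed rather than a one-shot probe.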

Interrupts in OS code: let’s reason about them. Yes, this means concurrency.

Date and Time
Monday, May 16, 2016 - 11:00am to 12:00pm
Location
Computer Science 402
Type
Talk
Speaker

Existing formally modelled and verified operating systems (OSes) typically run on uniprocessor platforms with interrupts mostly disabled. This makes formal reasoning more tractable: execution is mostly sequential. The eChronos OS is a real-time OS used in tightly constrained devices, running on (uniprocessor) embedded micro-controllers. It is used in the DARPA-funded HACMS program, where it runs the flight control software of a high-assurance quadcopter. To provide low latency, the eChronos OS runs with interrupts enabled and provides a preemptive scheduler. In terms of verification, this means that concurrency reasoning is required, which significantly increases the complexity: application and OS instructions may be interleaved with interrupt handler instructions.

In our work we explicitly model the effect of interrupts and their handling by the hardware and OS. We provide a general formal model of the interleaving between OS code, application code and interrupt handlers. We then instantiate this model to formalise the scheduling behavior of the eChronos OS, and prove the main scheduler property: the running task is always the highest-priority runnable task.
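The scheduler property can be stated as an executable invariant. The toy model below checks it at runtime on a made-up task set (the names, the lower-number-is-higher-priority convention, and the event set are all illustrative); the actual work proves the property formally over all interleavings, which a runtime check cannot do.

```python
# Toy model of the property proved for the eChronos OS: after every
# scheduling event (a task blocking, an interrupt waking a task), the
# running task is the highest-priority runnable task. Convention assumed
# here: lower number = higher priority, as in many RTOS APIs.

class Scheduler:
    def __init__(self, priorities):
        self.priority = dict(priorities)    # task name -> priority
        self.runnable = set(priorities)
        self.running = self.pick()

    def pick(self):
        return min(self.runnable, key=lambda t: self.priority[t])

    def check_invariant(self):
        assert self.running == self.pick(), "scheduler invariant violated"

    def block(self, task):
        self.runnable.discard(task)
        self.running = self.pick()
        self.check_invariant()

    def interrupt_wakes(self, task):
        # An interrupt handler makes a task runnable; a preemptive
        # scheduler must immediately re-evaluate which task runs.
        self.runnable.add(task)
        self.running = self.pick()
        self.check_invariant()

s = Scheduler({"idle": 99, "flight_control": 1, "telemetry": 5})
s.block("flight_control")            # telemetry becomes the running task
s.interrupt_wakes("flight_control")  # interrupt preempts telemetry
print(s.running)  # flight_control
```

The hard part of the verification is precisely what this model elides: interrupt handler instructions can interleave with the scheduler's own, so the invariant must hold across partially completed scheduler code, not just at quiescent points.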

June Andronick is a Senior Researcher at Data61|CSIRO (formerly NICTA). Her research focuses on increasing the reliability of critical software systems, by mathematically proving that the code behaves as expected and satisfies security and safety requirements.  She contributed to the seL4 correctness proof and now focuses on concurrency reasoning for OS code. She leads the concurrency software verification research in Data61, and is deputy leader of the Trustworthy Systems group. She was recognised in 2011 by MIT's Technology Review as one of the world's top young innovators (TR35). She holds a PhD in Computer Science from the University of Paris-Sud, France.

Towards Socially-aware Artificial Intelligence: Human Behavior Understanding at Scale

Date and Time
Wednesday, April 20, 2016 - 1:30pm to 2:30pm
Location
Computer Science 402
Type
Talk

Over the past sixty years, Intelligent Machines (IM) have made great progress in playing games, tagging images in isolation, and recently making decisions for self-driving vehicles. Despite these advancements, they are still far from making decisions in social scenes and assisting humans in public spaces such as terminals, malls, campuses, or any crowded urban environment. To overcome these limitations, we need to empower machines with social intelligence, i.e., the ability to get along well with others and facilitate mutual cooperation. This is crucial to design smart spaces that adapt to the behavior of humans for efficiency, or develop autonomous machines that assist in crowded public spaces (e.g., delivery robots, or self-navigating segways).

In this talk, I will present my work towards socially-aware machines that can understand human social dynamics and learn to forecast them. First, I will highlight the machine vision techniques behind understanding the behavior of more than 100 million individuals captured by multi-modal cameras in urban spaces. I will show how to use sparsity promoting priors to extract meaningful information about human behavior from an overwhelming volume of high dimensional and high entropy data. Second, I will introduce a new deep learning method to forecast human social behavior. The causality behind human behavior is an interplay between both observable and non-observable cues (e.g., intentions). For instance, when humans walk into crowded urban environments such as a busy train terminal, they obey a large number of (unwritten) common sense rules and comply with social conventions. They typically avoid crossing groups and keep a personal distance from their surroundings. I will present detailed insights on how to learn these interactions from millions of trajectories. I will describe a new recurrent neural network that can jointly reason on correlated sequences and simulate human trajectories in crowded scenes. It opens new avenues of research in learning the causalities behind the world we observe. I will conclude my talk by mentioning some ongoing work in applying these techniques to social robots, and the first generation of smart hospitals.
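To make the "personal distance" convention concrete, here is a hand-coded stand-in for the kind of interaction the talk's learned recurrent model captures from data: a constant-velocity forecast plus a simple repulsion between nearby pedestrians. Every parameter here is invented for illustration; the point of the actual work is that such rules are learned from millions of trajectories rather than written by hand.

```python
# Hand-coded stand-in (NOT the talk's learned model): forecast each
# pedestrian by constant velocity, plus a repulsion term that pushes
# people apart when they enter each other's personal space.

import math

def step(positions, velocities, dt=0.4, personal_radius=1.0, push=0.5):
    """One forecast step for a crowd of 2-D points."""
    new_positions = []
    for i, ((x, y), (vx, vy)) in enumerate(zip(positions, velocities)):
        fx = fy = 0.0
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            dx, dy = x - ox, y - oy
            d = math.hypot(dx, dy)
            if 0 < d < personal_radius:      # inside personal space: repel
                fx += push * dx / d
                fy += push * dy / d
        new_positions.append((x + (vx + fx) * dt, y + (vy + fy) * dt))
    return new_positions

# Two pedestrians walking toward each other sidestep instead of colliding.
pos = [(0.0, 0.0), (2.0, 0.1)]
vel = [(1.0, 0.0), (-1.0, 0.0)]
for _ in range(3):
    pos = step(pos, vel)
print(pos)
```

A rule like this fails in exactly the situations the talk targets: groups, crossings, and context-dependent conventions, which is why the learned model jointly reasons over the correlated trajectory sequences instead.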

Alexandre Alahi is currently a postdoctoral fellow at Stanford University and received his PhD from EPFL. His research interests span visual information processing, computer vision, machine learning, robotics, and are focused around understanding and forecasting human social behaviors. In particular, he is interested in sparse approximation, deep learning, big visual data processing, real-time machine vision, and multi-modal reasoning. He was awarded the Swiss NSF early and advanced researcher grants. He won the CVPR 2012 Open Source Award for his work on a Retina-inspired image descriptor, and the ICDSC 2009 Challenge Prize for his sparsity driven algorithm to track sport players. His PhD was nominated for the EPFL PhD prize. His research has been covered by the Wall Street Journal and the PBS TV channel in the US, as well as European newspapers, Swiss TV news, and the Euronews TV channel in Europe. He has also co-founded the startup Visiosafe, transferring his research on understanding human behavior into an industrial product. He was selected as one of the Top 20 Swiss Venture leaders in 2010 and won several startup competitions.

The New ABCs of Research: Achieving Breakthrough Collaborations

Date and Time
Thursday, April 21, 2016 - 8:00pm to 9:00pm
Location
Computer Science Large Auditorium (Room 104)
Type
Talk
All Princeton ACM / IEEE-CS meetings are open to the public. Students and their parents are welcome.

How should we organize research programs in computer science and technology? Research helps us produce innovative new products (the next iPhone), and research also delivers “foundational theories” (new algorithms and supporting technologies).

Solving the immense problems of the 21st century will require ambitious research teams that are skilled at producing practical solutions and foundational theories simultaneously – that is the ABC Principle: Applied & Basic Combined. Then these research teams can deliver high-impact outcomes by applying the SED Principle: Blend Science, Engineering and Design Thinking, which encourages use of the methods from all three disciplines. These guiding principles (ABC & SED) are meant to replace Vannevar Bush’s flawed linear model from 1945 that has misled researchers for 70+ years. These new guiding principles will enable students, faculty, business leaders, and government policy makers to accelerate discovery and innovation. The examples in the talk will emphasize how these guiding principles can be applied to reinvigorate computing research. 

Ben Shneiderman is a Distinguished University Professor in the Department of Computer Science at the University of Maryland. He is also the Founding Director (1983-2000) of the Human-Computer Interaction Laboratory (http://www.cs.umd.edu/hcil/), and a Member of the University of Maryland Institute for Advanced Computer Studies (UMIACS). He is a Fellow of the AAAS, ACM, IEEE, and NAI, and a Member of the National Academy of Engineering, in recognition of his pioneering contributions to human-computer interaction and information visualization. His contributions include the direct manipulation concept, clickable highlighted web-links, touchscreen keyboards, dynamic query sliders for Spotfire, development of treemaps, novel network visualizations for NodeXL, and temporal event sequence analysis for electronic health records.

Ben is the co-author with Catherine Plaisant of Designing the User Interface: Strategies for Effective Human-Computer Interaction (6th ed., 2016). His book Leonardo’s Laptop (MIT Press) won the IEEE book award for Distinguished Literary Contribution. Shneiderman’s latest book is The New ABCs of Research: Achieving Breakthrough Collaborations (Oxford, February 2016).

 

Abstractions for Programming the Data Plane at Line Rate

Date and Time
Tuesday, December 1, 2015 - 3:00pm to 4:00pm
Location
Sherrerd Hall 306
Type
Talk
Speaker

The evolution of network switches has been driven primarily by performance. Recently, thanks in part to the emergence of large datacenter networks, the need for better control over network operations, and the desire for new features, programmability of switches has become as important as performance. In response, researchers and practitioners have developed reconfigurable switching chips that are performance-competitive with line-rate fixed function switching chips. These chips provide some programmability through restricted hardware primitives that can be configured with software directives.

This talk will focus on abstractions for programming such chips. The first abstraction, packet transactions, lets programmers express packet processing in an imperative language under the illusion that the switch processes exactly one packet at a time. A compiler then translates this programmer view into a pipelined implementation that processes multiple packets concurrently. The second abstraction, a push-in first-out queue, allows programmers to express new scheduling algorithms using a priority queue coupled with a program that computes each packet's priority in that queue. Together, these two abstractions allow us to express several packet-processing functions at line rate, including in-network congestion control, active queue management, data-plane load balancing, measurement, and packet scheduling.
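The push-in first-out (PIFO) queue can be captured in a few lines: packets are pushed into their sorted position according to a programmable rank function, and dequeue always takes the head. The sketch below is a software illustration of the abstraction (the real contribution is realizing it in switching hardware at line rate); the packet fields are made up.

```python
# Minimal model of a push-in first-out (PIFO) queue: each arriving packet
# gets a computed rank and is pushed into its sorted position; dequeue
# always pops the head. Programming a scheduler then reduces to
# programming the rank function.

import heapq
import itertools

class PIFO:
    def __init__(self, rank_fn):
        self.rank_fn = rank_fn
        self.heap = []
        self.seq = itertools.count()   # tie-break preserves FIFO order

    def push(self, pkt):
        heapq.heappush(self.heap, (self.rank_fn(pkt), next(self.seq), pkt))

    def pop(self):
        return heapq.heappop(self.heap)[2]

# Strict-priority scheduling: the rank is just the packet's priority field.
q = PIFO(rank_fn=lambda pkt: pkt["prio"])
q.push({"id": "a", "prio": 2})
q.push({"id": "b", "prio": 0})
q.push({"id": "c", "prio": 2})
print([q.pop()["id"] for _ in range(3)])  # ['b', 'a', 'c']
```

Swapping in a different rank function (e.g., a virtual finish time) yields a different scheduler, such as weighted fair queueing, without touching the queue itself.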

This talk includes joint work with collaborators at MIT, Barefoot Networks, Cisco Systems, Microsoft Research, Stanford, and the University of Washington.

 

Anirudh Sivaraman is a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory. He is broadly interested in computer networking and his recent research work is in the area of programmable forwarding planes.

Behavioral Non-Portability in Scientific Numeric Computing

Date and Time
Friday, December 4, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Host
Aarti Gupta

The precise semantics of floating-point arithmetic programs depends on the execution platform, including the compiler and the target hardware. Platform dependencies are particularly pronounced for arithmetic-intensive scientific numeric programs and infringe on the highly desirable goal of software portability (which is in fact promised by heterogeneous computing frameworks like OpenCL): the same program run on the same inputs on different platforms can produce different results. So far so bad.

Serious doubts on the portability of numeric applications arise when these differences are behavioral, i.e. when they lead to changes in the control flow of a program. In this work I will present an algorithm that takes a numeric procedure and determines an input that is likely to lead to different decisions depending merely on how the arithmetic in the procedure is compiled. Our implementation of the algorithm requires minimal intervention by the user. I will illustrate its operation on examples characteristic of scientific numeric computing, where control flow divergence actually occurs across different execution platforms.
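A tiny example shows how a compilation choice alone can flip a branch. Floating-point addition is not associative, so if one platform evaluates a sum left-to-right while another's compiler reassociates it, the same input crosses a threshold differently. The threshold and inputs below are chosen by hand; the talk's algorithm searches for such inputs automatically.

```python
# Control-flow divergence from floating-point compilation choices: the
# same input takes different branches depending only on how the sum is
# associated, because IEEE-754 addition is not associative.

def branch(total, threshold=0.6):
    return "fast path" if total <= threshold else "careful path"

x, y, z = 0.1, 0.2, 0.3
left_to_right = (x + y) + z      # 0.6000000000000001 in IEEE doubles
reassociated  = x + (y + z)      # 0.6 exactly in IEEE doubles

print(branch(left_to_right), "vs", branch(reassociated))
```

Both results are "correct" floating-point sums; the divergence is purely a platform artifact, which is what makes it so hard to notice by testing on a single platform.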

Time permitting, I will also sketch how to prove the absence of inputs that may lead to control flow divergence, i.e. how to prove programs stable (in one of the many senses of this word). This is ongoing work.

Thomas Wahl is an Assistant Professor at Northeastern University. This is joint work with Yijia Gu, Mahsa Bayati, and Miriam Leeser at Northeastern.

A Fast Compiler for NetKAT

Date and Time
Monday, November 30, 2015 - 1:30pm to 2:30pm
Location
Sherrerd Hall 306
Type
Talk

In the past few years, high-level programming languages have played a key role in several networking platforms, by providing abstractions that streamline application development. Unfortunately, current compilers can take tens of minutes to generate forwarding state for even relatively small networks. This forces programmers to either work around performance issues or revert to using lower-level APIs.

In this talk, we first present a new compiler for NetKAT (a network programming language) that is two orders of magnitude faster than existing systems. Our key insight is a new intermediate representation based on a variant of binary decision diagrams that can represent network programs compactly and supports fast, algebraic manipulation. We argue that our compiler scales to large networks using a diverse set of benchmarks.

In addition to speed, our new intermediate representation lets us build a powerful new abstraction for network programming. Existing languages provide constructs for programming individual switches, which forces programmers to specify whole-network behavior on a switch-by-switch basis. For the first time, we can compile programs that syntactically represent sets of end-to-end paths through the network.  To do so, our compiler automatically inserts stateful operations (e.g., VLAN tagging) to distinguish overlapping paths from each other.

Finally, we present a very general implementation of network virtualization that leverages our ability to compile end-to-end paths.  The key insight is to give packets two locations--physical and virtual--and synthesize a program that moves packets along physical paths to account for hops in the virtual network. We show that different synthesis strategies can be used to implement global requirements, such as shortest paths, load-balancing, and so on.

Arjun Guha is an assistant professor of Computer Science at UMass Amherst. He enjoys tackling problems in systems using the tools and principles of programming languages. Apart from network programming, he has worked on Web security and system configuration languages. He received a PhD in Computer Science from Brown University in 2012 and a BA in Computer Science from Grinnell College in 2006.

Types and abstractions for concurrent programming

Date and Time
Monday, November 16, 2015 - 4:30pm to 5:30pm
Location
Computer Science 302
Type
Talk
Speaker
Yaron Minsky, from Jane Street Capital

Programming Languages Seminar

Concurrent programming is the art of coordinating multiple processes into a single program, and is a pervasive part of practical programming.  It's also hard to do well, in part because good abstractions for concurrent programming are hard to come by.

This talk discusses Async, a concurrent programming library for OCaml, a statically typed functional language in the ML family. We'll discuss how Async compares to other approaches to concurrent programming, and how we leverage OCaml's type system to build high-quality abstractions for concurrency, without modification to the underlying language.

We'll also discuss how our approach to concurrent programming has evolved over the last decade, and what aspects of the API have turned out to be problematic.
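Async's central abstraction is a deferred value: a placeholder for a result that is not ready yet, which later computation is chained onto instead of blocking a thread. As a rough analogy only, sketched in Python's asyncio rather than OCaml's Async (the two libraries differ in many details), the shape is:

```python
# Rough analogy to deferred-style concurrency, in Python's asyncio rather
# than OCaml's Async: operations return placeholders for future results,
# and a single-threaded scheduler interleaves them cooperatively.

import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)     # stands in for network or disk I/O
    return f"{name}: done"

async def main():
    # Run two operations concurrently in one thread, then continue once
    # both results have been determined (cf. combining two deferreds).
    a, b = await asyncio.gather(fetch("query", 0.01), fetch("log", 0.02))
    return [a, b]

print(asyncio.run(main()))
```

What the talk adds beyond this shape is the role of OCaml's static type system: a deferred's type records that its value may not yet exist, so forgetting to wait for it is a compile-time error rather than a race at runtime.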

On the Computational Complexity of some Consistency Properties in SDNs

Date and Time
Tuesday, October 27, 2015 - 4:30pm to 5:30pm
Location
Computer Science 302
Type
Talk
While Software Defined Networks are controlled centrally by one (logical) controller, the dissemination of updates in the network itself is an inherently asynchronous distributed process. Even though eventual consistency is easy to guarantee, many useful network properties might be violated during the update process. 
In this talk, we are going to look at the computational complexity of guaranteeing consistent updates for loop freedom and bandwidth limits. The best update schedules for loop freedom are hard to approximate, and the migration of unsplittable flows without congestion is not easy either. A different picture unfolds for splittable flows as used in, e.g., SWAN: It is possible to decide quickly if a consistent migration is possible, but the migration time can be unbounded. To this end, we show how any consistent migration can be performed in only a linear number of steps if restricted to one (logical) destination.
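The loop-freedom requirement can be phrased concretely: during an asynchronous update, any switch may still hold either its old or its new rule, so every mixed forwarding state that can arise must remain acyclic. The toy network below (switch names and rules invented for illustration) shows why update order matters.

```python
# Sketch of the loop-freedom property for SDN updates: a schedule is
# consistent only if every reachable mix of old and new rules keeps the
# forwarding graph toward the destination acyclic.

def has_loop(next_hops, dest):
    """Detect a forwarding loop; next_hops maps each node to its next hop."""
    for start in next_hops:
        seen = set()
        node = start
        while node != dest and node is not None:
            if node in seen:
                return True
            seen.add(node)
            node = next_hops.get(node)
    return False

old = {"A": "B", "B": "D", "C": "D"}   # all traffic destined for D
new = {"A": "C", "B": "A", "C": "D"}

# Updating B before A creates a transient loop: B -> A (new), A -> B (old).
print(has_loop(dict(old, B=new["B"]), "D"))   # True

# Updating A first is safe: A -> C (new), B -> D (old), C -> D.
print(has_loop(dict(old, A=new["A"]), "D"))   # False
```

Finding an update order that avoids all such transient loops, or deciding that none exists, is the scheduling problem whose hardness the talk analyzes.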
 
Klaus-Tycho Foerster is a PhD candidate in the Computer Engineering and Networks Laboratory at ETH Zurich, advised by Roger Wattenhofer. Prior to joining ETH as a research assistant, he earned his master's degrees in mathematics and computer science, both at the Braunschweig University of Technology. Klaus' research focus lies in distributed computing and graph algorithms, and, more recently, Software Defined Networking: funded by Microsoft Research, he is studying the complexity of consistent updates in SDNs.

Proving that programs eventually do something good

Date and Time
Tuesday, October 27, 2015 - 3:30pm to 4:30pm
Location
Computer Science 402
Type
Talk

In this talk I will discuss research advances that led to practical tools for automatically proving program termination and related properties, e.g. liveness. Practical applications include automatically proving device driver correctness, and pharmaceutical research. 
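The classic certificate for termination is a ranking function: a measure that is bounded below and strictly decreases on every loop iteration. The runtime check below merely illustrates the idea on Euclid's algorithm; the tools discussed in the talk discover such measures automatically and prove them symbolically for all inputs, not by testing.

```python
# Illustrating a termination argument: for Euclid's gcd loop, the second
# argument is a ranking function. It is a non-negative integer and strictly
# decreases on every iteration, so the loop cannot run forever. The assert
# checks this measure at runtime for one execution.

def gcd(a, b):
    measure = lambda a, b: b          # candidate ranking function
    while b != 0:
        before = measure(a, b)
        a, b = b, a % b
        assert 0 <= measure(a, b) < before   # bounded below and decreasing
    return a

print(gcd(252, 105))   # 21
```

Liveness properties generalize this pattern: instead of "the loop exits", one proves "something good eventually happens", again by exhibiting a well-founded measure of progress.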

Byron Cook is Professor of Computer Science at University College London.  Byron is also a Senior Principal at Amazon.  See http://www0.cs.ucl.ac.uk/staff/b.cook/ for more information.
