Defending Against Internet Censorship and Control, from Firewalls to Filter Bubbles

Date and Time
Tuesday, November 26, 2013 - 3:00pm to 4:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Speaker
Host
Jennifer Rexford
The Internet's promise of open communication and transparency is threatened by both censorship (blocking communication outright) and manipulation (otherwise affecting the performance that a user experiences or the information that a user sees). Censorship is a pervasive threat, with more than 60 countries around the world censoring Internet communications in some form. Unfortunately, many conventional censorship circumvention tools are detectable and can be blocked; in some cases, the use of such software may even be incriminating. Thus, users may need not only to defeat censorship mechanisms but also to hide the fact that they are doing so in the first place.

In the first part of the talk, I will describe two systems that achieve this goal, Infranet and Collage. In addition to circumventing censorship firewalls, both Infranet and Collage provide users with the deniability that they are using the censorship circumvention system in the first place. In both systems, we achieve deniability by designing them so that the observable network traffic that they generate is statistically indistinguishable from the user's "normal" traffic patterns if the tools were not in use. Infranet achieves deniability by hiding a user's requests for Web content in other Web traffic that resembles a user's typical browsing pattern. Collage achieves deniability by hiding content in user-generated content sites and encoding a user's request for the content in a sequence of operations that a given user is likely to perform.
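The embedding idea behind these systems can be illustrated with a toy sketch (this is not Infranet's or Collage's actual encoding, which uses statistical traffic mimicry and user-generated content sites): hide message bits in the least-significant bits of innocuous cover bytes, so the carrier looks unremarkable to an observer.

```python
# Toy steganographic embedding (illustrative only, not the systems' encoding):
# message bits are hidden in the least-significant bits of cover bytes.

def embed(cover: bytes, message: bytes) -> bytes:
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(cover), "cover too small"
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the low bit
    return bytes(out)

def extract(stego: bytes, n_bytes: int) -> bytes:
    bits = [stego[i] & 1 for i in range(n_bytes * 8)]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(n_bytes)
    )

cover = bytes(range(64))
msg = b"hi"
assert extract(embed(cover, msg), len(msg)) == msg
```

The real systems go much further: deniability requires that the *pattern* of carrier traffic, not just its content, match the user's normal behavior.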

In recent years, however, circumvention alone is insufficient: sophisticated organizations can also control users by manipulating network traffic. Manipulation can take many forms, from degrading performance to such an extent that a user does not want to use the service, to using Internet communication to generate propaganda (e.g., via social media), to attacking personalization algorithms to affect the results that a user sees in response to a search query. In the second part of the talk, I will describe several manipulation attacks that we have studied and inference techniques that we have developed to detect them. I will conclude with our ongoing efforts to tackle open challenges in this area, such as the deceptively challenging problem of confirming the existence of various forms of censorship in the first place.

Nick Feamster is an associate professor in the College of Computing at Georgia Tech. He received his Ph.D. in Computer Science from MIT in 2005, and his S.B. and M.Eng. degrees in Electrical Engineering and Computer Science from MIT in 2000 and 2001, respectively. His research focuses on many aspects of computer networking and networked systems, particularly network operations, network security, and censorship-resistant communication systems. In December 2008, he received the Presidential Early Career Award for Scientists and Engineers (PECASE) for his contributions to cybersecurity, notably spam filtering. His honors include the Technology Review 35 "Top Young Innovators Under 35" award, the ACM SIGCOMM Rising Star Award, a Sloan Research Fellowship, the NSF CAREER award, the IBM Faculty Fellowship, the IRTF Applied Networking Research Prize, and award papers at the SIGCOMM Internet Measurement Conference (measuring Web performance bottlenecks), SIGCOMM (network-level behavior of spammers), the NSDI conference (fault detection in router configuration), Usenix Security (circumventing web censorship using Infranet), and Usenix Security (web cookie analysis).

Memory Abstractions for Parallel Programming

Date and Time
Monday, November 18, 2013 - 12:30pm to 1:30pm
Location
Computer Science 302
Type
Talk
A memory abstraction is an abstraction layer between the program execution and the memory that provides a different "view" of a memory location depending on the execution context in which the memory access is made. Properly designed memory abstractions help ease the task of parallel programming by mitigating the complexity of synchronization and/or admitting more efficient use of resources. In this talk, I will demonstrate this point using two case studies on two types of memory abstractions.

The first memory abstraction is the cactus stack memory abstraction in Cilk-M, a Cilk-based work-stealing runtime system. Many multithreaded concurrency platforms that use a work-stealing runtime system incorporate a "cactus stack" to support multiple stack views for all the active children simultaneously. The use of cactus stacks, albeit essential, forces concurrency platforms to trade off between performance, memory consumption, and interoperability with serial code, due to their incompatibility with linear stacks. We propose a new strategy to build a cactus stack using thread-local memory mapping, which allows worker threads to have their respective linear views of the cactus stack. This cactus stack memory abstraction enables a concurrency platform that employs a work-stealing runtime system to satisfy all three criteria simultaneously.
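The structure of a cactus stack can be sketched with a toy model (this illustrates only the abstraction, not Cilk-M's memory-mapped implementation): each frame points to its parent, so children spawned in parallel share their ancestors' frames while each retains a linear view of its own call chain.

```python
# Toy model of a cactus stack: frames form a tree via parent pointers,
# and each worker's "stack" is the path from the root to its leaf frame.

class Frame:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

    def view(self):
        """This worker's linear view: frames from root to here."""
        frames = []
        f = self
        while f is not None:
            frames.append(f.name)
            f = f.parent
        return list(reversed(frames))

root = Frame("main")
a = Frame("child_a", root)   # spawned child
b = Frame("child_b", root)   # sibling running in parallel
assert a.view() == ["main", "child_a"]
assert b.view() == ["main", "child_b"]  # shares "main" with a
```

The incompatibility with linear stacks arises because both children logically sit "on top of" the same parent frame, which a single contiguous stack cannot express; thread-local memory mapping lets each worker see its path as if it were contiguous.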

The second memory abstraction is reducer hyperobjects (or reducers for short), a linguistic mechanism that helps avoid determinacy races in dynamic multithreaded programs. The Cilk-M runtime system supports reducers using the memory-mapping approach, which utilizes thread-local memory mapping and leverages the virtual-address translation provided by the underlying hardware to implement this memory abstraction. This memory-mapping approach yields access times close to 4x faster than the existing approach to implementing reducers.
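The reducer idea can be sketched in miniature (a conceptual illustration, not Cilk's implementation or its hardware-assisted address translation): each thread accumulates into a private view, and the views are combined with an associative operation, so no locking is needed on the hot path and no determinacy race arises.

```python
# Sketch of a reducer: per-thread private views, combined at the end.

import threading

class SumReducer:
    def __init__(self):
        self._local = threading.local()
        self._views = []           # one view dict per participating thread
        self._lock = threading.Lock()

    def add(self, x):
        if not hasattr(self._local, "view"):
            self._local.view = {"value": 0}      # lazily create this thread's view
            with self._lock:
                self._views.append(self._local.view)
        self._local.view["value"] += x           # race-free: private to this thread

    def result(self):
        # Combine per-thread views; real reducers require only associativity.
        return sum(v["value"] for v in self._views)

r = SumReducer()
threads = [threading.Thread(target=lambda: [r.add(1) for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert r.result() == 4000
```

The point of Cilk-M's memory-mapping approach is to make the "which view is mine?" lookup nearly free by letting the hardware's virtual-address translation resolve it, rather than paying for an explicit thread-local lookup as this sketch does.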

I-Ting Angelina Lee is a postdoctoral associate in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, working with Prof. Charles E. Leiserson. Her primary research interest is in the design and implementation of programming models, languages, and runtime systems to support multithreaded software. She received her Ph.D. from MIT, under the supervision of Prof. Charles E. Leiserson. In her Ph.D. thesis, she investigated several "memory abstractions," which help ease the task of parallel programming. Her prior work includes the "ownership-aware" transactional-memory methodology, the first transactional memory design that provides a structured programming style with provable safety guarantees for "open-nested" transactions, and JCilk, a variant of Java with multithreading provided by Cilk's fork-join primitives, whose exception-handling semantics integrate synergistically with those primitives. She received her Bachelor of Science in Computer Science from UC San Diego, where she worked on the Simultaneous Multithreading Simulator for DEC Alpha under the supervision of Prof. Dean Tullsen.

Algebra: A Tool for a Unifying View of Routing Protocol Behavior

Date and Time
Tuesday, November 19, 2013 - 4:30pm to 5:30pm
Location
Computer Science 302
Type
Talk
Host
Jennifer Rexford
The Internet routing system is composed of millions of routers running one or more of several routing protocols---BGP, OSPF, IS-IS, RIP, EIGRP, DSR, AODV, OLSR, RPL---with the goal of guiding data-packets from sources to destinations. Beyond the specificity of every routing protocol, a number of questions are pertinent across wide classes of them, namely:

  • Does the routing protocol always terminate in a stable state devoid of forwarding loops and forwarding deflections?
  • Are the paths followed by data-packets optimal in some sense?
  • How much of the inherent resiliency of the network does the routing protocol exploit?
  • Does the routing protocol scale with the number of destinations?
Routing selects paths for the transport of data-packets indirectly, through attributes they possess. Routing protocols operate by continually interleaving two processes: that of computing attributes of paths from the characteristics of their constituent links; and that of choosing the best attributes. The algebraic approach studies how the entwining of these two processes determines the global behavior of a routing protocol, providing answers to the questions above.
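The two interleaved processes can be made concrete with a small sketch (an assumed, simplified formulation of the algebraic view): an algebra pairs an *extension* operator (compose a link's attribute with a path's attribute) with a *selection* operator (prefer one attribute over another). Shortest-path routing is the instance where extension is addition and selection is minimum; widest-path routing swaps in different operators, and the same relaxation code answers questions for both.

```python
# A routing algebra as a pair of operators; one Bellman-Ford-style
# relaxation step is parameterized by the algebra (toy formulation).

shortest_path = {
    "extend": lambda link, path: link + path,  # accumulate cost
    "select": min,                             # prefer lower cost
}

widest_path = {
    "extend": min,   # a path is only as wide as its narrowest link
    "select": max,   # prefer wider paths
}

def relax(algebra, dist, edges):
    """One relaxation pass. edges: list of (u, v, link_attribute)."""
    new = dict(dist)
    for u, v, attr in edges:
        if u in dist:
            cand = algebra["extend"](attr, dist[u])
            new[v] = algebra["select"](new.get(v, cand), cand)
    return new

edges = [("d", "a", 1), ("d", "b", 4), ("a", "b", 2)]
dist = {"d": 0}
for _ in range(3):
    dist = relax(shortest_path, dist, edges)
assert dist == {"d": 0, "a": 1, "b": 3}
```

The algebraic approach asks which properties of `extend` and `select` (e.g., whether extension is monotone with respect to selection) guarantee termination, loop-freedom, or optimality, independently of any particular protocol.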

In this talk, I will illustrate the unifying power of the algebraic approach by delving into the non-optimality of paths determined by EIGRP (Enhanced Interior Gateway Routing Protocol) and termination and resiliency issues raised by BGP (Border Gateway Protocol). In addition, I will survey recent applications of the algebraic approach, notably to the interconnection of routing instances and to route aggregation in the Internet.

João Luís Sobrinho received his Licenciatura and Ph.D. degrees in Electrical and Computer Engineering from the Technical University of Lisbon, Portugal, in 1990 and 1995, respectively. During 1995 and 1996, he was a Member of the Technical Staff at Bell Labs, Nieuwegein, The Netherlands; he further consulted for Bell Labs, Murray Hill, NJ, from 1997 to 1999. He is currently Associate Professor at the Faculty of Engineering of the Technical University of Lisbon and a Researcher at the Telecommunications Institute. Prof. Sobrinho is a member of the IEEE and of the ACM.

Khan Academy Tech Talk

Date and Time
Thursday, October 3, 2013 - 5:00pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Speaker
Host
Student ACM Club
John Resig, creator of the jQuery JavaScript library and Dean of Computer Science at Khan Academy, will talk about the challenges and benefits of developing an educational website that scales to millions of learners every day.

Towards combining the advantages of SDN and distributed routing protocols

Date and Time
Tuesday, October 8, 2013 - 12:00pm to 1:30pm
Location
Computer Science 302
Type
Talk
Speaker
Stefano Vissicchio, University of Louvain
Host
Laurent Vanbever
Software Defined Networking (SDN) promises to bring unparalleled flexibility, configuration simplification and vendor lock-in removal. On the flip side, the SDN logic centralization requires a global renovation of network equipment. Moreover, it raises issues, e.g., on network resiliency and reaction to failures, that have been naturally and effectively solved in traditional routing protocols.

In this talk, we identify pros and cons of different hybrid SDN models, where SDN and distributed packet forwarding paradigms coexist. The coexistence models span (i) topology-based paradigm separation, (ii) service-based paradigm separation and (iii) paradigm integration. Further, we discuss how we envision that network hybridization can help mitigate major concerns of SDN. On the one hand, it enables backward compatibility and incremental deployment, hence providing means and incentives to transition to SDN. On the other hand, in our vision, it can represent an interesting network design point that combines the advantages of SDN and traditional paradigms. Unfortunately, due to their intrinsic heterogeneity, hybrid networks present new management and interoperability challenges. We describe techniques to realize network management abstractions, like declarative traffic control and consistent update, that hide the complexity of hybrid networks and guarantee network-wide correctness.

Stefano Vissicchio received his Master degree in computer science from the Roma Tre University in 2008, and the Ph.D. degree from the same institution in April 2012. He collaborated with the Italian research network ISP (Consortium GARR) between 2011 and 2012. Currently, he holds a postdoctoral position at the University of Louvain (UCL) in Belgium. His research interests are mainly focused on Software Defined Networking, network management and routing.

Should we secure routing with the RPKI?

Date and Time
Thursday, September 19, 2013 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Host
Jennifer Rexford
----Please note change in location----

In this talk I will overview the benefits and risks of adopting the Resource Public Key Infrastructure (RPKI), a new centralized security infrastructure for interdomain routing that has recently been standardized by the IETF. On one hand, I argue that the RPKI is one of the most effective ways to limit attacks on interdomain routing; more so, in fact, than more advanced cryptographic solutions that require more drastic changes to router hardware and protocol messages. On the other hand, I discuss how state-sponsored actors and malicious attackers can exploit the RPKI's centralized architecture to launch new attacks that can cause serious harm to the Internet's routing system. I conclude by discussing open problems that should be solved before the RPKI is widely adopted.
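The core RPKI check, route-origin validation, can be sketched as follows (a simplified stand-in following RFC 6811 semantics, not a real RPKI validator): a BGP announcement is compared against Route Origin Authorizations (ROAs), and classified as valid, invalid, or unknown.

```python
# Sketch of RPKI route-origin validation (simplified RFC 6811 semantics).

from ipaddress import ip_network

def validate(prefix, origin_as, roas):
    """roas: list of (roa_prefix, max_length, authorized_asn)."""
    announced = ip_network(prefix)
    covered = False
    for roa_prefix, max_length, asn in roas:
        roa = ip_network(roa_prefix)
        if announced.subnet_of(roa):          # some ROA covers this prefix
            covered = True
            if asn == origin_as and announced.prefixlen <= max_length:
                return "valid"
    return "invalid" if covered else "unknown"

roas = [("192.0.2.0/24", 24, 64500)]
assert validate("192.0.2.0/24", 64500, roas) == "valid"
assert validate("192.0.2.0/24", 64501, roas) == "invalid"     # wrong origin
assert validate("198.51.100.0/24", 64500, roas) == "unknown"  # no covering ROA
```

The centralization risk discussed in the talk lives in who controls the ROA set: an authority that deletes or alters a ROA can flip legitimate routes from "valid" to "invalid" for every relying party at once.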

Based on works with Robert Lychev, Pete Hummon, Jennifer Rexford, and Michael Schapira that appeared at SIGCOMM'10 and SIGCOMM'13, and work in progress with Kyle Brogle, Danny Cooper, Ethan Heilman, and Leonid Reyzin.

Sharon Goldberg is an Assistant Professor in the Department of Computer Science at Boston University. Her research focuses on finding practical solutions to problems in network security. She received her Ph.D. from Princeton University in 2009 and her B.A.Sc. from the University of Toronto in 2003, and has worked as a researcher at IBM, Cisco, and Microsoft, and as a telecommunications engineer at Bell Canada and Hydro One Networks.

Gaining Control of Cellular Traffic Accounting by Spurious TCP Retransmission

Date and Time
Thursday, September 19, 2013 - 12:00pm to 1:30pm
Location
Computer Science 402
Type
Talk
Host
Vivek Pai
Packet retransmission is a fundamental TCP feature that ensures reliable data transfer between two end nodes. Interestingly, when it comes to cellular data accounting, TCP retransmission creates an important policy issue. Cellular ISPs might argue that all retransmitted IP packets should be counted for billing, since they consume the resources of their infrastructures. On the other hand, the service subscribers might want to pay only for the application data, excluding the bytes spent on retransmission. Regardless of the policy, however, we find that TCP retransmission can be easily abused to manipulate the current practice of cellular traffic accounting.

In this work, we investigate the TCP retransmission accounting policies of 12 cellular ISPs in the world and report the accounting vulnerabilities with TCP retransmission attacks. First, we find that cellular data accounting policies vary from ISP to ISP. While the majority of cellular ISPs blindly account for every IP packet, some ISPs intentionally remove the retransmission packets from the user bill for fairness. Second, we show that it is easy to launch the "usage-inflation" attack on the ISPs that blindly account for every IP packet. In our experiments, we could inflate the usage up to the monthly limit within only 9 minutes of the attack, completely without the knowledge of the subscriber. For those ISPs that do not account for retransmission, we successfully launch the "free-riding" attack by tunneling the payload under fake TCP headers that look like retransmission. To counter the attacks, we argue that the ISPs should consider ignoring TCP retransmission for billing while detecting the tunneling attacks by deep packet inspection. We implement and evaluate Abacus, a light-weight accounting system that reliably detects "free-riding" attacks even on 10 Gbps links.
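The basic retransmission check an accounting system might start from can be sketched as follows (a toy byte-range classifier, not Abacus itself): a segment whose byte range was already seen on the same flow looks like a retransmission.

```python
# Toy retransmission classifier: flag segments whose sequence-number
# range overlaps bytes already seen on the same flow.

def classify(segments):
    """segments: iterable of (flow_id, seq, payload_len). Yields labels."""
    seen = {}  # flow_id -> set of byte offsets already covered
    for flow, seq, length in segments:
        covered = seen.setdefault(flow, set())
        span = set(range(seq, seq + length))
        label = "retransmission" if span & covered else "original"
        covered |= span
        yield label

segs = [("f1", 1000, 100), ("f1", 1100, 100), ("f1", 1000, 100)]
assert list(classify(segs)) == ["original", "original", "retransmission"]
```

This sketch also shows why the "free-riding" attack is hard to stop with headers alone: a tunneled segment with a repeated sequence number but *different* payload is indistinguishable here from an honest retransmission, which is why the talk argues for deep packet inspection to compare payloads.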

Younghwan Go is currently a Ph.D. student at KAIST. His research interests are networked and distributed systems, network security, and mobile networks. He received an M.S. degree in Electrical Engineering and Information Security from KAIST in 2013. Previously, he received a Bachelor's degree in Electrical Engineering from KAIST in 2011.

PoiRoot: Investigating the Root Cause of Interdomain Path Changes

Date and Time
Tuesday, August 6, 2013 - 11:00am to 12:00pm
Location
Computer Science 402
Type
Talk
Host
Jennifer Rexford
Interdomain path changes occur frequently. Because routing protocols expose insufficient information to reason about all changes, the general problem of identifying the root cause remains unsolved. In this work, we design and evaluate PoiRoot, a real-time system that allows a provider to accurately isolate the root cause (the network responsible) of path changes affecting its prefixes. First, we develop a new model describing path changes and use it to provably identify the set of all potentially responsible networks. Next, we develop a recursive algorithm that accurately isolates the root cause of any path change. We observe that the algorithm requires monitoring paths that are generally not visible using standard measurement tools. To address this limitation, we combine existing measurement tools in new ways to acquire path information required for isolating the root cause of a path change. We evaluate PoiRoot on path changes obtained through controlled Internet experiments, simulations, and "in the wild" measurements. We demonstrate that PoiRoot is highly accurate, works well even with partial information, and generally narrows down the root cause to a single network or two neighboring ones. On controlled experiments PoiRoot is 100% accurate, as opposed to prior work which is accurate only 61.7% of the time.
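One building block of root-cause isolation can be illustrated with a deliberately simplified toy (a hypothetical reduction, far simpler than PoiRoot's actual model and recursive algorithm): compare the old and new AS paths and find the last network they agree on, since that network made the different routing choice.

```python
# Toy divergence finder: the last common AS before the old and new
# paths differ is a candidate root-cause network.

def first_divergence(old_path, new_path):
    """Paths are AS sequences listed from the destination outward."""
    for i, (a, b) in enumerate(zip(old_path, new_path)):
        if a != b:
            # the last common network chose a different next hop
            return old_path[i - 1] if i > 0 else None
    return None  # paths agree on their common prefix

old = ["AS100", "AS200", "AS300", "AS400"]
new = ["AS100", "AS200", "AS500", "AS400"]
assert first_divergence(old, new) == "AS200"
```

The hard part PoiRoot addresses is that the diverging network may itself be reacting to a change elsewhere (e.g., a withdrawn route further along the old path), so a single path comparison is not sufficient evidence; hence the provable candidate set and the extra measurements of paths not visible to standard tools.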

Don't Drop, Detour!

Date and Time
Thursday, July 25, 2013 - 2:00pm to 2:30pm
Location
Computer Science 402
Type
Talk
Today's data centers must support a range of workloads with different demands. While existing approaches handle routine traffic smoothly, ephemeral but intense hotspots cause excessive packet loss and severely degrade performance. This loss occurs even though the congestion is typically highly localized, with spare buffer capacity available at nearby switches.

In this paper, we argue that switches should share buffer capacity to effectively handle this spot congestion without the latency or monetary hit of deploying large buffers at individual switches. Specifically, we present DIBS, a mechanism that achieves a near lossless network without requiring additional buffers. Using DIBS, a congested switch detours excess packets to neighboring switches to avoid dropping them. We implement DIBS in hardware, on software routers in a testbed, and in simulation, and we demonstrate that it reduces the 99th percentile of query completion time by 85%, with very little impact on background traffic.
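The detour rule at the heart of this idea can be sketched with an idealized simulation (this is an illustration, not the paper's hardware implementation, which must also handle forwarding the detoured packets back on course): when a switch's buffer is full, it bounces the packet to a neighbor instead of dropping it.

```python
# Idealized detour-instead-of-drop rule: on buffer overflow, hand the
# packet to a random neighbor, borrowing its spare buffer capacity.

import random

def forward(switch, packet, buffers, neighbors, capacity):
    """Enqueue packet at `switch`, detouring on overflow. Returns where it landed."""
    if len(buffers[switch]) < capacity:
        buffers[switch].append(packet)
        return switch                        # accepted locally
    detour = random.choice(neighbors[switch])
    buffers[detour].append(packet)           # neighbor lends buffer space
    return detour

buffers = {"s1": [], "s2": []}
neighbors = {"s1": ["s2"], "s2": ["s1"]}
for i in range(5):
    forward("s1", f"pkt{i}", buffers, neighbors, capacity=3)
assert len(buffers["s1"]) == 3 and len(buffers["s2"]) == 2
```

This works precisely because, as the abstract notes, hotspot congestion is highly localized: nearby switches usually have spare buffer space to lend.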

Flexible and Efficient Dynamic Software Updating for C

Date and Time
Thursday, July 11, 2013 - 11:30am to 12:30pm
Location
Computer Science 402
Type
Talk
Dynamic software updating (DSU) systems allow programs to be updated while running, thereby allowing developers to add features and fix bugs without downtime. No-downtime fixes are particularly important for security-critical systems (e.g., IDS/IPS appliances), and for security patches (e.g., to server infrastructure). In this talk I will present Kitsune, a new DSU system for C whose design has three notable features. First, Kitsune's updating mechanism updates the whole program, not individual functions. This mechanism is more flexible than most prior approaches and places no restrictions on data representations or allowed compiler optimizations. Second, Kitsune makes the important aspects of updating explicit in the program text, making its semantics easy to understand while keeping programmer work to a minimum. Finally, the programmer can write simple specifications to direct Kitsune to generate code that traverses and transforms old-version state for use by the new code; such state transformation is often necessary, and is significantly more difficult in prior DSU systems. We have used Kitsune to update six popular, open-source, single- and multi-threaded programs, and find that few program changes are required to use Kitsune, and that it incurs essentially no performance overhead.
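The control flow of explicit update points and state transformation can be illustrated with a toy sketch (Kitsune is a whole-program DSU system for C; this Python loop only mimics the shape of the idea, with hypothetical handler and transform functions): at a quiescent point in its main loop, the program checks for a pending update, migrates its old-version state, and continues with the new code.

```python
# Toy dynamic-update loop: check for an update at an explicit update
# point, transform old-version state, then run the new version's code.

def serve(handler, state, requests, updates):
    """updates: {request_index: (new_handler, state_transform)}."""
    log = []
    for i, req in enumerate(requests):
        if i in updates:                      # explicit update point
            new_handler, transform = updates[i]
            state = transform(state)          # migrate old-version state
            handler = new_handler             # switch to new code
        log.append(handler(state, req))
    return log

v1 = lambda state, req: f"v1:{req}:{state['count']}"
v2 = lambda state, req: f"v2:{req}:{state['hits']}"  # v2 renamed the field

out = serve(v1, {"count": 7}, ["a", "b", "c"],
            updates={2: (v2, lambda s: {"hits": s["count"]})})
assert out == ["v1:a:7", "v1:b:7", "v2:c:7"]
```

In Kitsune the analogues are far more involved: the "handler swap" loads a whole new program version, and the state transform is generated from programmer-written specifications that traverse and convert C data structures in place.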