
Defending Against Internet Censorship and Control, from Firewalls to Filter Bubbles

Date and Time
Tuesday, November 26, 2013 - 3:00pm to 4:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Speaker
Nick Feamster
Host
Jennifer Rexford
The Internet's promise of open communication and transparency is threatened by both censorship (blocking communication outright) and manipulation (otherwise affecting the performance that a user experiences or the information that a user sees). Censorship is a pervasive threat, with more than 60 countries around the world censoring Internet communications in some form. Unfortunately, many conventional censorship circumvention tools are detectable and can be blocked; in some cases, the use of such software may even be incriminating. Thus, users may need not only to defeat censorship mechanisms but also to hide the fact that they are doing so in the first place.

In the first part of the talk, I will describe two systems that achieve this goal: Infranet and Collage. In addition to circumventing censorship firewalls, both Infranet and Collage give users deniability: an observer cannot readily tell that they are using a circumvention system in the first place. Both systems achieve deniability by design, ensuring that the observable network traffic they generate is statistically indistinguishable from the traffic the user would generate if the tools were not in use. Infranet hides a user's requests for Web content inside other Web traffic that resembles the user's typical browsing pattern. Collage hides content on user-generated content sites and encodes a user's request for that content in a sequence of operations that the user is plausibly likely to perform anyway.
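As a rough illustration of the encoding idea behind Collage, the toy Python sketch below maps a short request onto a sequence of innocuous-looking site operations that an observer might attribute to ordinary browsing, and then recovers the request from that sequence. The operation names, functions, and encoding scheme here are hypothetical, chosen only for illustration; the actual systems additionally rely on steganography in user-generated content and careful traffic shaping to achieve statistical indistinguishability, which this sketch does not attempt.

```python
# Toy sketch only: encode a short request as a sequence of benign-looking
# "operations" and decode it back. Not Collage's actual protocol.

# Hypothetical pool of operations a user might plausibly perform on a
# user-generated-content site.
OPERATIONS = ["view_photo", "rate_photo", "post_comment", "search_tag"]

def encode_request(request: str) -> list[str]:
    """Map each 2-bit chunk of the request's bytes to one benign operation."""
    ops = []
    for byte in request.encode("utf-8"):
        for shift in (6, 4, 2, 0):          # four 2-bit chunks per byte
            ops.append(OPERATIONS[(byte >> shift) & 0b11])
    return ops

def decode_request(ops: list[str]) -> str:
    """Recover the request from the observed sequence of operations."""
    chunks = [OPERATIONS.index(op) for op in ops]
    data = bytearray()
    for i in range(0, len(chunks), 4):
        byte = 0
        for c in chunks[i:i + 4]:
            byte = (byte << 2) | c
        data.append(byte)
    return data.decode("utf-8")

if __name__ == "__main__":
    hidden = encode_request("GET /news")
    print(hidden[:8], "...")                # looks like ordinary site activity
    assert decode_request(hidden) == "GET /news"
```

The point of the sketch is only that the covert channel is carried by choices an observer already expects to see; making those choices statistically match a real user's behavior is the harder problem the talk addresses.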

In recent years, however, it has become clear that circumvention alone is insufficient: sophisticated organizations can also control users by manipulating network traffic. Manipulation can take many forms, from degrading performance to the point that a user abandons the service, to using Internet communication to generate propaganda (e.g., via social media), to attacking personalization algorithms to affect the results that a user sees in response to a search query. In the second part of the talk, I will describe several manipulation attacks that we have studied and the inference techniques that we have developed to detect them. I will conclude with our ongoing efforts to tackle open challenges in this area, such as the deceptively difficult problem of confirming that various forms of censorship are occurring in the first place.

Nick Feamster is an associate professor in the College of Computing at Georgia Tech. He received his Ph.D. in Computer Science from MIT in 2005, and his S.B. and M.Eng. degrees in Electrical Engineering and Computer Science from MIT in 2000 and 2001, respectively. His research spans many aspects of computer networking and networked systems, with a focus on network operations, network security, and censorship-resistant communication systems. In December 2008, he received the Presidential Early Career Award for Scientists and Engineers (PECASE) for his contributions to cybersecurity, notably spam filtering. His honors include the Technology Review 35 "Top Young Innovators Under 35" award, the ACM SIGCOMM Rising Star Award, a Sloan Research Fellowship, the NSF CAREER award, the IBM Faculty Fellowship, the IRTF Applied Networking Research Prize, and award papers at the SIGCOMM Internet Measurement Conference (measuring Web performance bottlenecks), SIGCOMM (network-level behavior of spammers), the NSDI conference (fault detection in router configuration), and USENIX Security (circumventing Web censorship using Infranet, and Web cookie analysis).
