
CS Department Colloquium Series

Making proof-based verified computation almost practical

Date and Time
Friday, October 26, 2012 - 3:30pm to 4:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Michael Freedman
How can a machine specify a computation to another one and then, without executing the computation, check that the other machine carried it out correctly? And how can this be done without assumptions about the performer (replication, trusted hardware, etc.) or restrictions on the computation? This is the problem of _verified computation_, and it is motivated by the cloud and other third-party computing models. It has long been known that (1) this problem can be solved in theory using probabilistically checkable proofs (PCPs) coupled with cryptographic tools, and (2) the performance of these solutions is wildly impractical (trillions of CPU-years or more to verify simple computations).
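The UT Austin protocol itself builds on PCPs and cryptographic commitments, but the basic idea of checking a computation far more cheaply than redoing it can be illustrated with a classic, much simpler example: Freivalds' algorithm for verifying a matrix product in O(n^2) time per round instead of the O(n^3) cost of recomputing it. This is a standalone sketch of that flavor of verification, not the protocol from the talk.

```python
import random

def freivalds_check(A, B, C, rounds=30):
    """Probabilistically verify that A @ B == C without recomputing the
    full product. Each round costs O(n^2); a wrong C slips through a
    round with probability at most 1/2, so the overall error is 2^-rounds."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A @ (B @ r) against C @ r -- both are cheap matrix-vector products.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # caught an incorrect result
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]  # the correct product
C_bad = [[19, 22], [43, 51]]   # one wrong entry
```

A verifier running this check never multiplies the matrices itself, which is the essential asymmetry that verified computation scales up to arbitrary programs.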

I will describe a project at UT Austin that challenges the second piece of this folklore. We have developed an interactive protocol that begins with an efficient argument [IKO CCC '07] and incorporates new theoretical work to improve performance by 20+ orders of magnitude. In addition, we have broadened the computational model from Boolean circuits to a general-purpose model. We have fully implemented the system, accelerated it with GPUs, and developed a compiler that transforms computations expressed in a high-level language to executables that implement the protocol entities.

The resulting system, while not quite ready for the big leagues, is close enough to practicality to suggest that in the near future, PCPs could be a real tool for building actual systems.

The talk is based on several published papers and ongoing work.

Michael Walfish is an assistant professor in the Computer Science Department at The University of Texas at Austin. His research interests are in systems, security, and networking. His honors include an Air Force Young Investigator Award, an NSF CAREER Award, a Sloan Research Fellowship, a Teaching Excellence Award from the UT College of Natural Sciences, the Intel Early Career Faculty Honor Program, and the UT Society for Teaching Excellence. He received his B.A. from Harvard and his Ph.D. from MIT, both in Computer Science.

The Power of Examples

Date and Time
Thursday, October 11, 2012 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Adam Finkelstein
Designers in many fields rely on examples for inspiration, and examples play an important role in art and design curricula. In this talk, I'll describe several ways that examples help the creative process by illustrating concepts and alternatives. Online media offer a corpus of examples at a scale and diversity never before seen. This wealth of examples opens up new possibilities, but also poses several challenges. How can we leverage these resources? My group's research tools harvest and synthesize examples to empower more people to design interactive systems, learners to acquire new skills, experts to be more creative, and programmers to engage in more design thinking. This research shapes my project-based design teaching, which emphasizes creating diverse alternatives, self-assessment, and using examples to provide design insights and teach abstract principles. This spring, we collaborated with Coursera to launch the first massive-scale class with self and peer assessment. This enabled online students to engage in open-ended design projects, and it worked surprisingly well.

Scott is an Associate Professor of Computer Science at Stanford University. He co-directs the Human-Computer Interaction Group and holds the Bredt Faculty Scholar development chair. Organizations around the world use his lab's open-source design tools and curricula; several books and popular press articles have covered his research and teaching. He has been awarded the Katayanagi Emerging Leadership Prize, a Sloan Fellowship, an NSF CAREER Award, and a Microsoft Research New Faculty Fellowship. He has authored and co-authored more than 40 peer-reviewed articles; six have been awarded best paper or honorable mention at the premier HCI conferences (CHI and UIST). His former graduate students are leading professors, researchers, founders, social entrepreneurs, and engineers. He has a dual BA in Art-Semiotics and Computer Science from Brown University, studied graphic design at RISD, and holds an MS and PhD in Computer Science from UC Berkeley. He serves on the editorial boards of TOCHI and HCI, was the program co-chair of UIST 2011, and co-chaired the systems area of CHI 2010. He helped introduce peer assessment to open online education, and taught the first peer-assessed online course.

Full Duplex Wireless

Date and Time
Friday, October 12, 2012 - 3:30pm to 4:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Michael Freedman
Wireless networking traditionally assumes that radios are half-duplex. On a given frequency, a half-duplex radio can either transmit or receive, but not both at the same time. I present recent results demonstrating that a full-duplex radio -- a radio that can receive and transmit simultaneously on the same frequency -- can be built using commodity, off-the-shelf components. I discuss some of the possible implications of full duplex, including solutions to long-standing wireless problems such as the hidden terminal problem and access point fairness, and new applications such as signal boosters. I examine the design space for full duplex radios, describing the corresponding tradeoffs. Full duplex has the potential to revolutionize a large number of wireless systems: I'll conclude with current strengths and limitations of the technology to try to shed some light on where I think it might be most and least successful.
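The core obstacle to full duplex is that a radio's own transmission drowns out the far node's much weaker signal. The enabling trick is self-interference cancellation: since the radio knows exactly what it transmitted, it can subtract an estimate of that signal from what it receives. This toy sketch (made-up sample values and channel gains, real samples are complex-valued RF waveforms) shows the arithmetic of that subtraction.

```python
def cancel_self_interference(received, own_tx, h_est):
    """Subtract our own (known) transmit samples, scaled by the estimated
    self-interference channel gain h_est, from the received samples."""
    return [r - h_est * t for r, t in zip(received, own_tx)]

# Hypothetical samples chosen for illustration only.
own_tx = [1.0, -1.0, 1.0, 1.0, -1.0]        # our transmission (known to us)
remote = [0.01, 0.01, -0.01, 0.01, -0.01]   # far node's much weaker signal
h_self = 0.9                                 # strong self-interference path

# What the antenna actually hears: our own signal plus the remote one.
received = [h_self * t + s for t, s in zip(own_tx, remote)]

# After cancellation, only the weak remote signal should remain.
cleaned = cancel_self_interference(received, own_tx, h_self)
```

In practice the channel estimate is imperfect and cancellation is done in stages (antenna placement, analog, then digital), which is where the design-space tradeoffs in the talk come in.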

The talk is intended for a CS audience; it does not assume any RF theory background, just a high school level understanding of the physics and mathematics of sine waves.

Philip Levis is an Associate Professor of Computer Science and Electrical Engineering at Stanford University. He received his Sc.B. from Brown University in 1999, his M.S. from the University of Colorado at Boulder in 2001, and his Ph.D. from UC Berkeley in 2005. In 2008 he received an NSF CAREER award and a Microsoft Research New Faculty Fellowship. He researches the design and implementation of networked systems that interact with the world, including operating systems and protocols for embedded wireless devices, wireless mesh protocols, network infrastructure for virtual worlds, and energy efficient computing. The results of his research, including the TinyOS operating system, nesC language, Trickle algorithm, and the collection tree protocol (CTP), have been adopted by tens of thousands of users and researchers worldwide. He's authored a few Internet standards based on his work. He really likes excellent engineering and has a self-destructive aversion to low-hanging fruit.

When GPUs meet CPUs: opportunities, challenges and solutions in heterogeneous architectures

Date and Time
Wednesday, October 3, 2012 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Margaret Martonosi
The last decade has seen a paradigm shift in the architecture of computing platforms: Uni-processors have given way to multi-core (many-core) processors, and now the industry is moving toward heterogeneous architectures that combine CPUs and GPUs on the same chip, as seen in offerings from Intel, AMD, and NVIDIA.

Heterogeneous architectures are especially attractive because they can provide high performance and energy efficiency for general-purpose applications as well as high-throughput applications. However, these architectures introduce several new challenges, including programming them, determining power and performance trade-offs, and developing hardware solutions that exploit the underlying heterogeneity.

In this talk, I will present some of our recent work that reduces the software effort required in programming such architectures, and provides hints to estimate the performance and power behavior of CPUs, GPUs, or CPUs+GPUs. I will also discuss architecture solutions that improve overall system performance by taking into account the difference in characteristics of CPU and GPU applications, and optimizing the cache partitioning, prefetching, and DRAM scheduling to best suit the workload needs.
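To make the cache-partitioning idea concrete, here is a deliberately simplified sketch (not the mechanism from the talk): given profiled hit counts as a function of allocated cache ways for one CPU application and one GPU application, exhaustively pick the way split that maximizes total hits. GPU workloads are often latency-tolerant and cache-insensitive, so such a policy tends to hand most ways to the CPU. All numbers below are invented for illustration.

```python
def best_partition(cpu_hits, gpu_hits, total_ways):
    """Pick the way split maximizing combined hits.
    cpu_hits[w] / gpu_hits[w] = hits that workload achieves with w ways."""
    cpu_ways = max(range(total_ways + 1),
                   key=lambda w: cpu_hits[w] + gpu_hits[total_ways - w])
    return cpu_ways, total_ways - cpu_ways

# Hypothetical profiled hit counts, indexed by number of ways allocated.
cpu_hits = [0, 40, 70, 85, 92, 95, 96, 97, 97]  # cache-sensitive CPU app
gpu_hits = [0, 10, 12, 14, 14, 14, 14, 14, 14]  # cache-insensitive GPU app

cpu_ways, gpu_ways = best_partition(cpu_hits, gpu_hits, 8)
# The search favors the CPU app, whose hit rate keeps improving with ways.
```

Real hardware policies must also cope with dynamic phase behavior and the cost of repartitioning, which a static exhaustive search like this ignores.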

Hyesoon Kim is an Assistant Professor in the School of Computer Science at Georgia Institute of Technology. Her research interests include high-performance energy-efficient heterogeneous architectures, programmer-compiler-microarchitecture interaction, and developing tools to help parallel programming. She received a BA in mechanical engineering from Korea Advanced Institute of Science and Technology (KAIST), an MS in mechanical engineering from Seoul National University, and an MS and a Ph.D. in computer engineering from The University of Texas at Austin. She received an NSF CAREER Award in 2011.

Mapping connectomes with crowd and machine intelligence

Date and Time
Tuesday, May 15, 2012 - 4:30pm to 5:30pm
Location
Friend Center 008
Type
CS Department Colloquium Series
Host
David Blei
A connectome is a map of a nervous system in the form of a directed graph in which nodes represent neurons and edges represent synapses. The ability to rapidly map connectomes could arguably revolutionize neuroscience, much as genomics has impacted biology. However, the only connectome known in its entirety is that of the roundworm C. elegans. A mere 7000 connections between 300 neurons took over a dozen years of labor to map in the 1970s and 80s. Fortunately, technological advances are speeding up the mapping of connectomes. New multibeam scanning electron microscopes will soon generate a petabyte of image data from a cubic millimeter of brain tissue every two weeks. From such images, it should be possible to map every connection between neurons in the volume---in principle. Unfortunately, it could take up to a million years for a single person to carry out this feat manually. Clearly, our capacity to acquire "big data" from the brain has far outpaced our ability to analyze it. My lab has been developing computational technologies to deal with this data deluge. We have invented the first machine learning methods based on genuine measures of image segmentation performance, and have applied these to create artificial intelligence (AI) for tracing the "wires" of the brain, the branches of neurons. We have also developed methods of recruiting, training, and aggregating the "wisdom of crowds" to work with the AI. Both machine and crowd intelligence are harnessed by EyeWire, an online community of laypersons who map connectomes by playing a game of coloring neural images.
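The directed-graph framing above translates directly into code. This minimal sketch (neuron names are invented, not real C. elegans cells) stores a connectome as an adjacency list and traverses the synaptic wiring downstream of a neuron.

```python
from collections import defaultdict

# A connectome as a directed graph: neurons are nodes, synapses are edges.
connectome = defaultdict(list)

def add_synapse(pre, post):
    """Record a synapse from presynaptic neuron `pre` to postsynaptic `post`."""
    connectome[pre].append(post)

# Hypothetical three-neuron fragment for illustration.
add_synapse("sensory_1", "inter_1")
add_synapse("inter_1", "motor_1")
add_synapse("sensory_1", "motor_1")

def downstream(neuron):
    """All neurons reachable from `neuron` along directed synaptic paths."""
    seen, frontier = set(), [neuron]
    while frontier:
        n = frontier.pop()
        for post in connectome[n]:
            if post not in seen:
                seen.add(post)
                frontier.append(post)
    return seen
```

The hard part, of course, is not the graph data structure but producing its edges from petabytes of electron-microscope imagery, which is exactly what the segmentation AI and EyeWire crowdsourcing address.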

Security and Privacy for the Smart Grid

Date and Time
Thursday, May 17, 2012 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Jennifer Rexford
The electric power grid pre-dates computer and digital networking technologies. As these technologies have progressed, the power grid has shifted from electro-mechanical devices toward networked computers as a foundation for intelligence and control. This trend has culminated in the concept of the "Smart Grid", which envisions the use of networked computers to transform the reliability and efficiency of power delivery. While providing many benefits, this change also invites some of the security and privacy threats that have plagued networked computers in other application areas. This talk will look at a selection of research efforts to address these threats in various power grid environments. These include the home environment, where the Smart Grid brings new ideas for monitoring power in homes to allow consumers to participate more directly in energy markets, and the substation environment, where automation improves remote management through high-speed substation networks. Topics will include remote attestation for electric power meters, verifying responses to load shed instructions, using hardware to provide code white-listing, secure multicast IPsec, and meter infrastructure for rural regions based on cognitive radio and white spaces. The talk will also summarize some of the emerging research questions for the Smart Grid that determine likely threats to security and privacy.
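Among the listed topics, remote attestation for power meters has a compact core that can be sketched in a few lines. The idea: the utility challenges a meter with a fresh nonce, and the meter responds with a keyed MAC over the nonce and a hash of the firmware it is running, so a stale or tampered image produces a response the utility rejects. This is a generic challenge-response sketch under an assumed pre-shared key, not the specific protocol from the talk (real attestation also needs hardware roots of trust to keep compromised firmware from lying about its own hash).

```python
import hashlib
import hmac
import os

DEVICE_KEY = os.urandom(32)  # shared secret, assumed provisioned at manufacture

def meter_attest(firmware_image: bytes, nonce: bytes) -> bytes:
    """Meter side: MAC the challenge nonce together with a hash of the
    firmware actually running on the device."""
    fw_hash = hashlib.sha256(firmware_image).digest()
    return hmac.new(DEVICE_KEY, nonce + fw_hash, hashlib.sha256).digest()

def utility_verify(expected_firmware: bytes, nonce: bytes, response: bytes) -> bool:
    """Utility side: recompute the expected response for the approved
    firmware image and compare in constant time."""
    fw_hash = hashlib.sha256(expected_firmware).digest()
    expected = hmac.new(DEVICE_KEY, nonce + fw_hash, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = os.urandom(16)       # fresh per challenge, prevents replay
good_fw = b"meter firmware v1.2"
response = meter_attest(good_fw, nonce)
```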

Carl A. Gunter is a professor in the Computer Science Department of the University of Illinois. He is director of Illinois Security Lab and director of the HHS Strategic Healthcare IT Advanced Research Projects on Security (SHARPS). He has made research contributions in the semantics of programming languages, formal analysis of networks and security, and privacy. His recent work concerns security and privacy issues for the power grid and healthcare information technology. He is the author of more than 140 scientific research publications and patents and a textbook on semantics of programming languages published by MIT Press. He is a founder of Probaris Technologies, a company that provides identity management technologies.

Finding Malware on a Web Scale

Date and Time
Monday, May 7, 2012 - 1:30am to 2:30am
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Michael Freedman
Over the last several years, JavaScript malware has emerged as one of the most popular ways to deliver drive-by attacks to unsuspecting users through the browser. This talk covers recent Microsoft Research experiences with finding malware on the web. It highlights two tools: Nozzle and Zozzle. Nozzle is a runtime malware detector that focuses on finding heap spraying attacks. Zozzle is a mostly static detector that finds heap sprays and other types of JavaScript malware. Both are extremely precise: Nozzle's false positive rate is close to one in a billion; Zozzle's is about one in a million.

Both are deployed by Bing and are used daily to find thousands of malicious web sites. This talk will focus on interesting interplay between static and runtime analysis and cover what it takes to migrate research ideas into real-world products.
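A heap-spraying attack fills the heap with many copies of a "sled" leading to shellcode, so that a corrupted pointer is likely to land somewhere useful. As a deliberately naive illustration (far cruder than Nozzle's actual analysis, which interprets heap object contents as code and measures their "surface area" as a jump target), one can flag buffers containing very long runs of a single sled byte:

```python
def looks_like_spray(buf: bytes, min_run: int = 1024) -> bool:
    """Naive heuristic: spray payloads often contain very long runs of a
    single sled byte (e.g. 0x90, the x86 NOP). Flag any run >= min_run."""
    run, prev = 0, None
    for b in buf:
        run = run + 1 if b == prev else 1
        prev = b
        if run >= min_run:
            return True
    return False

benign = bytes(range(256)) * 8            # varied content, short runs only
sprayed = b"\x90" * 4096 + b"\xcc\xcc"    # long NOP sled plus a payload stub
```

Attackers easily evade byte-run heuristics with polymorphic sleds, which is why production detectors combine runtime heap analysis (Nozzle) with statically trained classifiers over JavaScript ASTs (Zozzle).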

Ben Livshits is a researcher at Microsoft Research in Redmond, WA and an affiliate professor at the University of Washington. Originally from St. Petersburg, Russia, he received a bachelor's degree in Computer Science and Math from Cornell University in 1999, and his M.S. and Ph.D. in Computer Science from Stanford University in 2002 and 2006, respectively. Dr. Livshits' research interests include the application of sophisticated static and dynamic analysis techniques to finding errors in programs.

Ben has published papers at PLDI, POPL, Oakland Security, Usenix Security, CCS, SOSP, ICSE, FSE, and many other venues. He is known for his work in software reliability and especially tools to improve software security, with a primary focus on approaches to finding buffer overruns in C programs and a variety of security vulnerabilities (cross-site scripting, SQL injections, etc.) in Web-based applications. He is the author of several dozen academic papers and patents. Lately he has been focusing on how Web 2.0 application and browser reliability, performance, and security can be improved through a combination of static and runtime techniques. Ben generally does not speak of himself in the third person.

Certifiable Quantum Dice

Date and Time
Tuesday, March 27, 2012 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Sanjeev Arora
A source of independent and uniform random bits is a basic resource in many computational tasks, such as cryptography, game-theoretic protocols, algorithms, and physical simulations. In the classical world it is impossible to construct a random number generator whose output can be certified to be a uniformly random n-bit string, since there seems to be no basis on which to reject any particular output in favor of any other. Quantum mechanics allows for a remarkable random number generator whose output is certifiably random in the following sense: if the output passes a simple statistical test, and a no-signaling condition is met between the two boxes in the randomness-generating device (based, say, on the speed-of-light limit imposed by special relativity), then even a quantum skeptic (viz. Einstein's famous quote "God does not play dice with the Universe") would be convinced that the output is truly random, independent of the validity of quantum mechanics. Based on joint work with Thomas Vidick.
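The "simple statistical test" in such schemes is typically a Bell-inequality test such as the CHSH game, where two non-communicating boxes receive bits x and y and must answer bits a and b with a XOR b = x AND y. The key fact, easy to verify by brute force, is that no deterministic (and hence no classical) strategy wins more than 75% of the time, while quantum boxes reach about 85.3%; an observed rate above 75% therefore certifies that the outputs cannot be pre-programmed. This sketch just checks the classical bound by enumeration:

```python
from itertools import product

def chsh_win_rate(strategy_a, strategy_b):
    """Fraction of the four input pairs (x, y) won by a deterministic
    strategy, where strategy_a[x] and strategy_b[y] are the boxes' answers
    and the game requires a XOR b == x AND y."""
    wins = 0
    for x, y in product([0, 1], repeat=2):
        a, b = strategy_a[x], strategy_b[y]
        wins += (a ^ b) == (x & y)
    return wins / 4

# Enumerate every deterministic classical strategy for the two boxes.
best_classical = max(chsh_win_rate(sa, sb)
                     for sa in product([0, 1], repeat=2)
                     for sb in product([0, 1], repeat=2))
# No classical strategy exceeds 3/4; shared quantum entanglement reaches
# cos^2(pi/8) ~ 0.853.
```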

Embracing Interference in Wireless Systems

Date and Time
Monday, March 26, 2012 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Jennifer Rexford
The wireless medium is a shared resource. If nearby devices transmit at the same time, the transmitted signals interfere, resulting in a collision. In traditional networks, collisions cause the loss of the transmitted information. For this reason, wireless systems have been designed with the assumption that interference is intrinsically harmful and must be avoided.

My research takes an alternate approach: Instead of viewing interference as an inherently counterproductive phenomenon that should be avoided, I design practical systems that can successfully reconstruct the transmitted information even in the presence of collisions, thereby rendering the interference harmless. Moreover, these new systems can exploit interference constructively to increase throughput and improve security.

In the talk, I will present the first WiFi receiver that decodes colliding packets, rendering WiFi interference harmless. I will also show how to inject useful interference that increases network throughput using a system called analog network coding. Then, I will talk about the role of interference in improving security. I will use interference to secure insecure medical implants, and establish secure wireless connections without having users enter passwords or use pre-shared secret keys.
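The intuition behind useful interference shows up most simply in the digital relative of analog network coding: a relay broadcasts the XOR of two packets, and each endpoint cancels out the packet it already knows to recover the other, turning one transmission into two deliveries. (Analog network coding applies the same cancellation idea directly to colliding radio waveforms; the XOR below is only an idealized stand-in.)

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two equal-length toy packets (contents invented for illustration).
alice_pkt = b"hello from alice"
bob_pkt = b"greetings, bob!!"

# The relay broadcasts one combined packet instead of two separate ones.
coded = xor_bytes(alice_pkt, bob_pkt)

# Each side already knows its own packet and cancels it out.
bob_recovers = xor_bytes(coded, bob_pkt)      # Bob learns Alice's packet
alice_recovers = xor_bytes(coded, alice_pkt)  # Alice learns Bob's packet
```

The same know-your-own-signal-and-cancel-it principle underlies decoding WiFi collisions and using friendly jamming for security, as described in the talk.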

Shyamnath Gollakota is a PhD candidate in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. His research in networking focuses on addressing wireless interference and security. He has been awarded the ACM SIGCOMM 2008 Best Paper Award for ZigZag decoding, the ACM SIGCOMM 2011 Best Paper Award for securing medical implants, and the AT&T Applied Security Award for password-free wireless security. His work has been covered by Slashdot, BBC Radio, Forbes, and Network World. He received his master's degree in Electrical Engineering and Computer Science from MIT, and a bachelor's degree in Computer Science and Engineering from IIT Madras.

Data Privacy Technologies: From Alchemy to an Engineering Discipline

Date and Time
Thursday, March 8, 2012 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Established practices for data privacy focus on simplistic transformations such as the removal of “personally identifiable information.” On the other hand, academia has produced a long line of work on privacy-preserving computation that has yet to be translated into practice. I envision privacy technologies as an engineering discipline grounded in a solid understanding of what technological mechanisms can and cannot do.

In this talk I will describe my past, ongoing and planned work towards this goal. The first part of this research program — and the main topic of my doctoral work — has been to demonstrate the inadequacy of the current paradigm by developing reidentification and statistical inference algorithms for various types of “anonymized” data: our preferences, transactions, social relationships, and behavior. The second part is to develop an approach to building systems based on lightweight cryptography, a hybrid of centralized and decentralized architectures, and incorporation of policy-based defenses. I will describe how I have applied these principles to my work on location privacy and behavioral ad targeting.
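The reidentification algorithms in this line of work exploit rich auxiliary data, but the basic failure mode of "remove the PII and release" can be shown with a toy linkage attack: join the "anonymized" release against a public roster on quasi-identifiers such as zip code, birth year, and sex. All records below are invented; real attacks (e.g. on the Netflix Prize data) link on far noisier signals like movie ratings.

```python
QUASI_IDS = ("zip", "birth_year", "sex")

# "Anonymized" release: names removed, quasi-identifiers kept (all fake data).
anonymized = [
    {"zip": "08544", "birth_year": 1985, "sex": "F", "diagnosis": "flu"},
    {"zip": "08540", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]

# Public auxiliary data, e.g. a voter roll (also fake).
public_roster = [
    {"name": "Alice Example", "zip": "08544", "birth_year": 1985, "sex": "F"},
    {"name": "Bob Example", "zip": "08540", "birth_year": 1990, "sex": "M"},
]

def reidentify(release, roster):
    """Link each 'anonymous' record to roster entries sharing all
    quasi-identifiers; a unique match reidentifies the person."""
    out = {}
    for rec in release:
        matches = [p["name"] for p in roster
                   if all(p[q] == rec[q] for q in QUASI_IDS)]
        if len(matches) == 1:
            out[matches[0]] = rec["diagnosis"]
    return out
```

When the quasi-identifier combination is unique in the population, removing names offers essentially no protection, which motivates the engineering-discipline approach the talk advocates.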

Arvind Narayanan is a post-doctoral computer science researcher at Stanford and a junior affiliate scholar at the Stanford Law School Center for Internet and Society. He completed his Ph.D. at UT Austin in 2009. Narayanan studies information privacy and security, and moonlights in policy. His paper on deanonymization of large datasets won the 2008 Privacy Enhancing Technologies Award, and his 2011 paper on location privacy won the distinguished paper award at NDSS.
