CS Department Colloquium Series

Full Duplex Wireless

Date and Time
Friday, October 12, 2012 - 3:30pm to 4:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Michael Freedman
Wireless networking traditionally assumes that radios are half-duplex. On a given frequency, a half-duplex radio can either transmit or receive, but not both at the same time. I present recent results demonstrating that a full-duplex radio -- a radio that can receive and transmit simultaneously on the same frequency -- can be built using commodity, off-the-shelf components. I discuss some of the possible implications of full duplex, including solutions to long-standing wireless problems such as the hidden terminal problem and access point fairness, as well as applications such as signal boosters. I examine the design space for full-duplex radios, describing the corresponding tradeoffs. Full duplex has the potential to revolutionize a large number of wireless systems; I'll conclude with the current strengths and limitations of the technology to shed some light on where I think it might be most and least successful.

The talk is intended for a CS audience; it does not assume any RF theory background, just a high school level understanding of the physics and mathematics of sine waves.
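
The central obstacle to full duplex is that a node's own transmission drowns out the signal it is trying to receive; since the node knows exactly what it transmitted, it can subtract an estimate of that self-interference. The sketch below illustrates only this subtraction idea in a toy numerical form (all signals and the single-gain channel model are made up for illustration; real full-duplex radios combine analog and digital cancellation stages):

```python
import numpy as np

# Toy sketch of self-interference cancellation: the node knows its own
# transmitted waveform, so it estimates the self-interference gain and
# subtracts it, leaving the weak remote signal.
rng = np.random.default_rng(0)
n = 4096

remote = 0.01 * rng.standard_normal(n)   # weak signal of interest
self_tx = rng.standard_normal(n)         # node's own (known) transmission
received = remote + 10.0 * self_tx       # both arrive on the same frequency

# Estimate the self-interference channel (here a single scalar gain)
# by least squares against the known transmitted waveform.
gain = (received @ self_tx) / (self_tx @ self_tx)
cleaned = received - gain * self_tx

print(np.linalg.norm(received), np.linalg.norm(cleaned))
```

Because the remote signal is uncorrelated with the node's own transmission, the least-squares projection removes almost all of the (much stronger) self-interference while leaving the remote signal nearly intact.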

Philip Levis is an Associate Professor of Computer Science and Electrical Engineering at Stanford University. He received his Sc.B. from Brown University in 1999, his M.S. from the University of Colorado at Boulder in 2001, and his Ph.D. from UC Berkeley in 2005. In 2008 he received an NSF CAREER award and a Microsoft Research New Faculty Fellowship. He researches the design and implementation of networked systems that interact with the world, including operating systems and protocols for embedded wireless devices, wireless mesh protocols, network infrastructure for virtual worlds, and energy efficient computing. The results of his research, including the TinyOS operating system, nesC language, Trickle algorithm, and the collection tree protocol (CTP), have been adopted by tens of thousands of users and researchers worldwide. He's authored a few Internet standards based on his work. He really likes excellent engineering and has a self-destructive aversion to low-hanging fruit.

When GPUs meet CPUs: opportunities, challenges and solutions in heterogeneous architectures

Date and Time
Wednesday, October 3, 2012 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Margaret Martonosi
The last decade has seen a paradigm shift in the architecture of computing platforms: uni-processors have given way to multi-core (many-core) processors, and the industry is now moving toward heterogeneous architectures that combine CPUs and GPUs on the same chip, as seen in products from Intel, AMD, and NVIDIA.

Heterogeneous architectures are especially attractive because they can provide high performance and energy efficiency for both general-purpose and high-throughput applications. However, these architectures introduce several new challenges, including programming them, determining power and performance trade-offs, and developing hardware solutions that exploit the underlying heterogeneity.

In this talk I will present some of our recent work that reduces the software effort required to program such architectures and provides hints for estimating the performance and power behavior of CPUs, GPUs, or CPUs+GPUs. I will also discuss architectural solutions that improve overall system performance by taking into account the differing characteristics of CPU and GPU applications, and by optimizing cache partitioning, prefetching, and DRAM scheduling to best suit workload needs.

Hyesoon Kim is an Assistant Professor in the School of Computer Science at the Georgia Institute of Technology. Her research interests include high-performance, energy-efficient heterogeneous architectures; programmer-compiler-microarchitecture interaction; and tools to support parallel programming. She received a BA in mechanical engineering from the Korea Advanced Institute of Science and Technology (KAIST), an MS in mechanical engineering from Seoul National University, and an MS and a Ph.D. in computer engineering from The University of Texas at Austin. She received an NSF CAREER award in 2011.

Mapping connectomes with crowd and machine intelligence

Date and Time
Tuesday, May 15, 2012 - 4:30pm to 5:30pm
Location
Friend Center 008
Type
CS Department Colloquium Series
Host
David Blei
A connectome is a map of a nervous system in the form of a directed graph in which nodes represent neurons and edges represent synapses. The ability to rapidly map connectomes could arguably revolutionize neuroscience, much as genomics has impacted biology. However, the only connectome known in its entirety is that of the roundworm C. elegans. A mere 7000 connections between 300 neurons took over a dozen years of labor to map in the 1970s and 80s. Fortunately, technological advances are speeding up the mapping of connectomes. New multibeam scanning electron microscopes will soon generate a petabyte of image data from a cubic millimeter of brain tissue every two weeks. From such images, it should be possible to map every connection between neurons in the volume---in principle. Unfortunately, it could take up to a million years for a single person to carry out this feat manually. Clearly, our capacity to acquire "big data" from the brain has far outpaced our ability to analyze it. My lab has been developing computational technologies to deal with this data deluge. We have invented the first machine learning methods based on genuine measures of image segmentation performance, and have applied these to create artificial intelligence (AI) for tracing the "wires" of the brain, the branches of neurons. We have also developed methods of recruiting, training, and aggregating the "wisdom of crowds" to work with the AI. Both machine and crowd intelligence are harnessed by EyeWire, an online community of laypersons who map connectomes by playing a game of coloring neural images.
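
The representation described above is simple to state even though mapping it is enormous work: neurons are nodes and synapses are directed edges. A minimal sketch (the neuron names and wiring below are entirely made up for illustration):

```python
# A connectome as a directed graph: keys are neurons, values are the
# neurons they synapse onto. All names and edges here are invented.
connectome = {
    "sensory_1": ["inter_1"],
    "sensory_2": ["inter_1", "motor_1"],
    "inter_1":   ["motor_1", "motor_2"],
    "motor_1":   [],
    "motor_2":   [],
}

num_neurons = len(connectome)
num_synapses = sum(len(targets) for targets in connectome.values())
print(num_neurons, num_synapses)  # 5 neurons, 5 synapses
```

At the scale discussed in the talk, the same structure would hold roughly 300 nodes and 7000 edges for C. elegans, and vastly more for a cubic millimeter of mammalian brain.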

Security and Privacy for the Smart Grid

Date and Time
Thursday, May 17, 2012 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Jennifer Rexford
The electric power grid pre-dates computer and digital networking technologies. As these technologies have progressed, the power grid has shifted from electro-mechanical devices toward networked computers as a foundation for intelligence and control. This trend has culminated in the concept of the "Smart Grid", which envisions the use of networked computers to transform the reliability and efficiency of power delivery. While providing many benefits, this change also invites some of the security and privacy threats that have plagued networked computers in other application areas. This talk will look at a selection of research efforts to address these threats in various power grid environments. These include the home environment, where the Smart Grid brings new ideas for monitoring power in homes to allow consumers to participate more directly in energy markets, and the substation environment, where automation improves remote management through high-speed substation networks. Topics will include remote attestation for electric power meters, verifying responses to load-shed instructions, using hardware to provide code white-listing, secure multicast IPsec, and meter infrastructure for rural regions based on cognitive radio and white spaces. The talk will also summarize some of the emerging research questions for the Smart Grid that will determine likely threats to security and privacy.
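
To make the remote-attestation topic concrete: the core check is that a meter can prove it is running known-good firmware. The sketch below shows only the hash-comparison step under invented names and values (real meter attestation relies on hardware roots of trust and challenge-response with nonces, which this toy omits):

```python
import hashlib

# Toy sketch of the measurement-comparison step of remote attestation.
# The firmware bytes and version string here are made up.
KNOWN_GOOD = hashlib.sha256(b"meter-firmware-v1.2").hexdigest()

def attest(reported_firmware: bytes) -> bool:
    """Verifier-side check: does the reported firmware hash to the
    known-good measurement?"""
    return hashlib.sha256(reported_firmware).hexdigest() == KNOWN_GOOD

print(attest(b"meter-firmware-v1.2"), attest(b"tampered-firmware"))
```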

Carl A. Gunter is a professor in the Computer Science Department of the University of Illinois. He is director of Illinois Security Lab and director of the HHS Strategic Healthcare IT Advanced Research Projects on Security (SHARPS). He has made research contributions in the semantics of programming languages, formal analysis of networks and security, and privacy. His recent work concerns security and privacy issues for the power grid and healthcare information technology. He is the author of more than 140 scientific research publications and patents and a textbook on semantics of programming languages published by MIT Press. He is a founder of Probaris Technologies, a company that provides identity management technologies.

Finding Malware on a Web Scale

Date and Time
Monday, May 7, 2012 - 1:30am to 2:30am
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Michael Freedman
Over the last several years, JavaScript malware has emerged as one of the most popular ways to deliver drive-by attacks to unsuspecting users through the browser. This talk covers recent Microsoft Research experiences with finding malware on the web. It highlights two tools: Nozzle and Zozzle. Nozzle is a runtime malware detector that focuses on finding heap-spraying attacks. Zozzle is a mostly static detector that finds heap sprays and other types of JavaScript malware. Both are extremely precise: Nozzle's false positive rate is close to one in a billion; Zozzle's is about one in a million.
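
For flavor, here is a deliberately crude static heuristic in the spirit of heap-spray detection. This is not Nozzle's or Zozzle's actual algorithm (the function, patterns, and sample scripts are invented for illustration), but it shows the kind of source-level signal a static detector can key on: shellcode-style `unescape` strings combined with looping concatenation:

```python
import re

def looks_like_heap_spray(js_source: str) -> bool:
    """Crude illustrative heuristic: flag scripts containing
    %u-encoded unescape() payloads together with a loop."""
    payloads = re.findall(r'unescape\("%u[0-9a-fA-F]{4}', js_source)
    loops = re.findall(r'for\s*\(', js_source)
    return len(payloads) > 0 and len(loops) > 0

benign = 'for (var i = 0; i < 10; i++) { total += prices[i]; }'
suspicious = ('var sled = unescape("%u9090%u9090");'
              'for (var i = 0; i < 1000; i++) { sled += sled; }')
print(looks_like_heap_spray(benign), looks_like_heap_spray(suspicious))
```

A real detector must do far better than this to reach false positive rates near one in a million, which is why Zozzle uses trained classifiers over JavaScript AST features rather than fixed patterns.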

Both are deployed by Bing and are used daily to find thousands of malicious web sites. This talk will focus on interesting interplay between static and runtime analysis and cover what it takes to migrate research ideas into real-world products.

Ben Livshits is a researcher at Microsoft Research in Redmond, WA, and an affiliate professor at the University of Washington. Originally from St. Petersburg, Russia, he received a bachelor's degree in Computer Science and Math from Cornell University in 1999, and his M.S. and Ph.D. in Computer Science from Stanford University in 2002 and 2006, respectively. Dr. Livshits' research interests include the application of sophisticated static and dynamic analysis techniques to finding errors in programs.

Ben has published papers at PLDI, POPL, Oakland Security, Usenix Security, CCS, SOSP, ICSE, FSE, and many other venues. He is known for his work in software reliability and especially tools to improve software security, with a primary focus on approaches to finding buffer overruns in C programs and a variety of security vulnerabilities (cross-site scripting, SQL injections, etc.) in Web-based applications. He is the author of several dozen academic papers and patents. Lately he has been focusing on how Web 2.0 application and browser reliability, performance, and security can be improved through a combination of static and runtime techniques. Ben generally does not speak of himself in the third person.

Certifiable Quantum Dice

Date and Time
Tuesday, March 27, 2012 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Sanjeev Arora
A source of independent and uniform random bits is a basic resource in many computational tasks, such as cryptography, game-theoretic protocols, algorithms, and physical simulations. In the classical world it is impossible to construct a random number generator whose output can be certified to be a uniformly random n-bit string, since there seems to be no basis on which to reject any particular output in favor of any other. Quantum mechanics allows for a remarkable random number generator whose output is certifiably random in the following sense: if the output passes a simple statistical test, and a no-signaling condition is met between the two boxes in the randomness-generating device (based, say, on the speed-of-light limit imposed by special relativity), then even a quantum skeptic (recall Einstein's famous quip that ``God does not play dice with the Universe'') would be convinced that the output is truly random, independent of the validity of quantum mechanics. Based on joint work with Thomas Vidick.
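
The "simple statistical test" alluded to above can be illustrated with the classic CHSH game (the actual protocol in the talk differs in its details; this sketch only conveys the statistical-certification idea). Any classical strategy for the two boxes wins the CHSH game in at most 75% of rounds, while quantum devices can reach about cos²(π/8) ≈ 85.4%; a win rate significantly above 75% therefore certifies non-classical behavior:

```python
import math
import random

# Simulate a device that achieves the quantum CHSH winning probability,
# then check its win rate against the 75% classical bound.
random.seed(1)
QUANTUM_WIN = math.cos(math.pi / 8) ** 2   # ~0.854

def play_round() -> bool:
    # Stand-in for one round of the game against the two boxes.
    return random.random() < QUANTUM_WIN

rounds = 20_000
wins = sum(play_round() for _ in range(rounds))
win_rate = wins / rounds
print(win_rate > 0.75)   # well above what any classical device can sustain
```

The striking point of the talk is that this certification holds even for a verifier who distrusts quantum mechanics itself, as long as the no-signaling condition is enforced.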

Embracing Interference in Wireless Systems

Date and Time
Monday, March 26, 2012 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Jennifer Rexford
The wireless medium is a shared resource. If nearby devices transmit at the same time, the transmitted signals interfere, resulting in a collision. In traditional networks, collisions cause the loss of the transmitted information. For this reason, wireless systems have been designed with the assumption that interference is intrinsically harmful and must be avoided.

My research takes an alternate approach: instead of viewing interference as an inherently counterproductive phenomenon that should be avoided, I design practical systems that can successfully reconstruct the transmitted information even in the presence of collisions, thus rendering the interference harmless. Moreover, these new systems can exploit interference constructively to increase throughput and improve security.

In this talk, I will present the first WiFi receiver that decodes colliding packets, rendering WiFi interference harmless. I will also show how to inject useful interference that increases network throughput using a system called analog network coding. Then, I will discuss the role of interference in improving security: I will use interference to secure otherwise-insecure medical implants, and to establish secure wireless connections without having users enter passwords or use pre-shared secret keys.

Shyamnath Gollakota is a PhD candidate in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. His research in networking focuses on addressing wireless interference and security. He received the ACM SIGCOMM 2008 Best Paper Award for ZigZag decoding, the ACM SIGCOMM 2011 Best Paper Award for securing medical implants, and the AT&T Applied Security Award for password-free wireless security. His work has been covered by Slashdot, BBC Radio, Forbes, and Network World. He received his master's in Electrical Engineering and Computer Science from MIT and a bachelor's degree in Computer Science and Engineering from IIT Madras.

Data Privacy Technologies: From Alchemy to an Engineering Discipline

Date and Time
Thursday, March 8, 2012 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Established practices for data privacy focus on simplistic transformations such as the removal of “personally identifiable information.” On the other hand, academia has produced a long line of work on privacy-preserving computation that has yet to be translated into practice. I envision privacy technologies as an engineering discipline grounded in a solid understanding of what technological mechanisms can and cannot do.

In this talk I will describe my past, ongoing, and planned work towards this goal. The first part of this research program — and the main topic of my doctoral work — has been to demonstrate the inadequacy of the current paradigm by developing reidentification and statistical inference algorithms for various types of “anonymized” data: our preferences, transactions, social relationships, and behavior. The second part is to develop an approach to building systems based on lightweight cryptography, a hybrid of centralized and decentralized architectures, and the incorporation of policy-based defenses. I will describe how I have applied these principles to my work on location privacy and behavioral ad targeting.
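
The reidentification algorithms in the talk are far more sophisticated, but the underlying linkage idea can be shown in miniature (all records below are fabricated): "anonymized" data often shares quasi-identifiers, such as ZIP code and birth year, with public data, and joining on those fields re-attaches names to sensitive attributes.

```python
# Toy linkage attack on fabricated records: join "anonymized" health
# data to a public roster on quasi-identifiers (zip, birth_year).
anonymized = [
    {"zip": "08544", "birth_year": 1975, "diagnosis": "flu"},
    {"zip": "08540", "birth_year": 1982, "diagnosis": "asthma"},
]
public = [
    {"name": "Alice", "zip": "08544", "birth_year": 1975},
    {"name": "Bob",   "zip": "08540", "birth_year": 1982},
]

reidentified = {
    p["name"]: a["diagnosis"]
    for a in anonymized
    for p in public
    if (a["zip"], a["birth_year"]) == (p["zip"], p["birth_year"])
}
print(reidentified)  # {'Alice': 'flu', 'Bob': 'asthma'}
```

The work described above extends this idea from exact joins on a few columns to statistical matching over high-dimensional, noisy data such as movie ratings and social graphs.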

Arvind Narayanan is a post-doctoral computer science researcher at Stanford and a junior affiliate scholar at the Stanford Law School Center for Internet and Society. He completed his Ph.D. at UT Austin in 2009. Narayanan studies information privacy and security, and moonlights in policy. His paper on deanonymization of large datasets won the 2008 Privacy Enhancing Technologies Award, and his 2011 NDSS paper on location privacy won a distinguished paper award.

Computation meets Statistics: Trade-offs and fundamental limits

Date and Time
Tuesday, February 28, 2012 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Robert Schapire
The past decade has seen the emergence of datasets of unprecedented scale, with both large sample sizes and high dimensionality. Massive data sets arise in various domains, among them computer vision, natural language processing, computational biology, social network analysis, and recommendation systems, to name a few. In many such problems, the bottleneck is not just the number of data samples, but also the computational resources available to process the data. Thus, a fundamental goal in these problems is to characterize how estimation error behaves as a function of the sample size, the number of parameters, and the available computational budget.

In this talk, I present two research threads that provide complementary lines of attack on this broader research agenda: (i) lower bounds for statistical estimation under computational constraints, and (ii) distributed algorithms for statistical inference. The first characterizes fundamental limits uniformly over all methods, whereas the second provides explicit algorithms that exploit the interaction of computational and statistical considerations in a distributed computing environment.

[Joint work with John Duchi, Pradeep Ravikumar, Peter Bartlett and Martin Wainwright]
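
One of the simplest distributed-inference patterns in this space, shown here as an illustration rather than as the talk's specific algorithms, is averaging local estimates: each of m machines computes an estimate from its own n samples, and the aggregator averages them, recovering (for the mean) the accuracy of a single machine with m·n samples while each machine touches only its own shard:

```python
import random

# Each of m machines estimates a mean from its own n samples;
# the aggregator averages the m local estimates.
random.seed(0)
true_mean = 3.0
m, n = 10, 1_000

local_means = []
for _ in range(m):
    samples = [true_mean + random.gauss(0, 1) for _ in range(n)]
    local_means.append(sum(samples) / n)

aggregate = sum(local_means) / m   # behaves like a mean over m*n samples
print(abs(aggregate - true_mean))  # small estimation error
```

The interesting theoretical questions, which the talk addresses, are when and for which estimators such one-shot aggregation matches centralized accuracy, and what communication is fundamentally required when it does not.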

Alekh Agarwal is a fifth year PhD student at UC Berkeley, jointly advised by Peter Bartlett and Martin Wainwright. Alekh has received PhD fellowships from Microsoft Research and Google. His main research interests are in the areas of machine learning, convex optimization, high-dimensional statistics, distributed machine learning and understanding the computational trade-offs in machine learning problems.

Software Testing and Verification with Grammatical Inference

Date and Time
Monday, March 12, 2012 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
David Walker
Recent progress in automated deductive reasoning techniques has triggered a small scientific revolution in automated software testing, verification, and security. Indeed, it is becoming increasingly difficult to find new automated software analysis papers that do not use or assume the existence of propositional satisfiability and satisfiability-modulo-theories solvers.

In this talk, I will discuss the first results of my efforts to answer a simple question: What kind of automated reasoning will trigger the next scientific revolution in automated software analysis? As a potential candidate, I will present grammatical inference (GI), a class of techniques for learning formal languages. By inductively generalizing from examples, GI techniques nicely complement the capabilities of existing (deductive) approaches. Furthermore, the inferred grammars (or automata) are themselves analyzable objects, which can be used as approximations or abstractions of the behavior of the analyzed program.
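
As a taste of inductive generalization from examples, here is one of the simplest passive GI constructions (far more basic than the techniques in the talk): a prefix-tree acceptor built from positive example strings. It accepts exactly the examples, and state-merging algorithms generalize it into a smaller automaton for the target language.

```python
# Build a prefix-tree acceptor (PTA) from positive examples: a trie
# whose accepting states are marked. This is the standard starting
# point for state-merging grammatical-inference algorithms.
def build_pta(examples):
    trie = {}
    for word in examples:
        node = trie
        for ch in word:
            node = node.setdefault(ch, {})
        node["<accept>"] = True  # mark end-of-word state as accepting
    return trie

def accepts(trie, word):
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return node.get("<accept>", False)

pta = build_pta(["ab", "abb", "ba"])
print(accepts(pta, "abb"), accepts(pta, "aa"))  # True False
```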

In this talk, I'll analyze several open problems, namely (1) test generation for programs whose inputs must adhere to complex grammars, and (2) automated verification of web sanitizers, and offer some solutions. The common trait of these problems is that they require exact or approximate understanding of the relation between a program's input and output. I'll explain how GI techniques can infer such relations and serve as a basis for inventing powerful new software analyses. In particular, I'll show that GI-based automated testing can find six times more vulnerabilities than state-of-the-art methods, increasing code coverage by up to 58%. The novel symbolic learning technique that I'll present learns the input-output relations of web sanitizers (for subsequent verification) in seconds, whereas constructing the same relations previously took a human expert several hours.

Domagoj Babic received his Dipl.Ing. in Electrical Engineering and M.Sc. in Computer Science from the University of Zagreb (Faculty of Electrical Engineering and Computing) in 2001 and 2003, and his Ph.D. in Computer Science from the University of British Columbia in 2008. After spending a bit more than a year in industry, he joined UC Berkeley as a research scientist.

Domagoj's research focuses on various aspects of software analysis (software verification, testing, and security), decision procedures, grammatical inference, and applied formal methods in general.

He is a recipient of Canada's NSERC PDF Research Fellowship (2010-2012), a Microsoft Graduate Research Fellowship (2005-2007), a Fulbright Fellowship (declined in order to attend UBC), and several awards at international programming competitions (1st place in the bit-vector arithmetic category at the 2007 Satisfiability Modulo Theories Competition, and 3rd place in the satisfiable-crafted instances category at the 2005 Satisfiability Testing Competition).

For more, see: http://www.domagoj.info/
