CS Department Colloquium Series

Network Level IoT Security

Date and Time
Friday, November 22, 2019 - 11:00am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Jennifer Rexford

David Hay
Computer networks have undergone, and continue to experience, a major transformation: billions of low-cost devices are being connected to the network to provide additional functionality and a better user experience. Unlike traditional network devices, these devices, collectively known as the "Internet of Things" (IoT), typically have very limited computational, memory, and power resources. IoT devices have become a major security concern, both because of human factors and because of the technical challenges of deploying security mechanisms on resource-constrained devices. The number and diversity of IoT devices create a huge attack surface that attackers often exploit to launch large-scale attacks, sometimes using well-known vulnerabilities.

This talk will highlight the security concerns of IoT devices from a networking perspective and explore how to secure IoT devices using whitelists, in which communication between a device and an endpoint is prohibited unless that endpoint appears in the device's whitelist. Finally, we will discuss deployment options for such a solution, namely within the Internet gateway, as a virtual network function within the ISP network, or a combination of the two.
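The whitelist mechanism described above can be sketched as a simple gatekeeper: a flow between a device and a remote endpoint is permitted only if that endpoint appears in the device's whitelist. A minimal illustration (the device IDs, endpoint names, and `is_allowed` helper below are hypothetical, not from the talk):

```python
# Minimal sketch of per-device whitelist enforcement. Device IDs and
# endpoint names are hypothetical examples.

# Each IoT device is mapped to the set of endpoints it may talk to.
WHITELISTS = {
    "thermostat-01": {"api.vendor.example", "ntp.pool.example"},
    "camera-02": {"cloud.cam.example"},
}

def is_allowed(device: str, endpoint: str) -> bool:
    """Permit a flow only if the endpoint is whitelisted for the device."""
    return endpoint in WHITELISTS.get(device, set())
```

A deny-by-default policy like this one is what makes the approach attractive for low-resource devices: the gateway or ISP enforces it, so the device itself needs no extra software.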

Bio: David Hay is an Associate Professor with the Rachel and Selim Benin School of Computer Science and Engineering, Hebrew University, Jerusalem, Israel. He received his B.A. (summa cum laude) and Ph.D. degrees in computer science from the Technion—Israel Institute of Technology, Haifa, Israel, in 2001 and 2007, respectively. In addition, he has been with IBM Haifa Research Labs, Haifa, Israel; Cisco Systems, San Jose, CA, USA; the Electronics Department, Politecnico di Torino, Turin, Italy; and the Electrical Engineering Department at Columbia University, New York, NY, USA. In 2010, he co-founded (with Prof. Bremler-Barr) the DEEPNESS lab, which focuses on deep packet inspection in next-generation network devices. He has served as a technical program committee member of numerous networking conferences and, since 2018, as an editor of IEEE/ACM Transactions on Networking. His research interests are in computer networks—in particular, network algorithmics, packet classification, deep packet inspection, network survivability and resilience, software-defined networking, network-function virtualization, and various aspects of network security.

To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Hardness of Approximation: From the PCP Theorem to the 2-to-2 Games Theorem

Date and Time
Monday, December 9, 2019 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Ran Raz

Subhash Khot
Computer scientists firmly believe that no algorithm can efficiently compute optimal solutions to a class of problems called NP-hard problems. For many NP-hard problems, even computing an approximately optimal solution remains NP-hard. This phenomenon, known as the hardness of approximation, has numerous connections to proof checking, optimization, combinatorics, discrete Fourier analysis, and geometry.

The talk will provide an overview of these connections. It will address why graph coloring is a computationally hard problem, how it is possible to check a proof without even looking at it, why computer scientists love the majority vote, and whether a shape exists that looks spherical as well as cubical. It will explain how all this fits into a 30-year research program starting with the foundational Probabilistically Checkable Proofs (PCP) Theorem and leading to the recent 2-to-2 Games Theorem.

Bio: Subhash Khot is a professor of computer science at the Courant Institute of Mathematical Sciences at New York University. His prior affiliations include Princeton University (PhD), Institute for Advanced Study (member), Georgia Tech (assistant professor) and University of Chicago (visiting professor). His research centers on computational complexity and its connections to geometry and analysis. He is a recipient of the National Science Foundation’s Alan T. Waterman Award, the International Mathematical Union’s Nevanlinna Prize, the Simons Investigator Award, a MacArthur Fellowship, and is a Fellow of the Royal Society. 

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Representation, Modeling, and Optimization in Reinforcement Learning

Date and Time
Thursday, October 24, 2019 - 12:30pm to 1:30pm
Location
Friend Center Convocation Room
Type
CS Department Colloquium Series
Host
Sanjeev Arora

Sham Kakade
Reinforcement learning is now the dominant paradigm for how an agent learns to interact with the world. The approach has led to successes ranging across numerous domains, including game playing and robotics, and it holds much promise in new domains, from self-driving cars to interactive medical applications. 
Some of the central challenges are:
- Representation learning: does having a good representation of the environment permit efficient reinforcement learning?
- Modeling: should we explicitly build a model of our environment, or should we directly learn how to act?
- Optimization: in practice, deployed algorithms often use local search heuristics. Can we provably understand when these approaches are effective, and provide faster and more robust alternatives?
This talk will survey a number of results on these basic questions. Throughout, we will highlight the interplay of theory, algorithm design, and practice.

Bio: Sham Kakade is a Washington Research Foundation Data Science Chair, with a joint appointment in the Department of Computer Science and the Department of Statistics at the University of Washington, and is a co-director of the Algorithmic Foundations of Data Science Institute. He works on the mathematical foundations of machine learning and AI. Sham's thesis helped lay the foundations of the PAC-MDP framework for reinforcement learning. With his collaborators, his contributions include: one of the first provably efficient policy search methods for reinforcement learning, Conservative Policy Iteration; the mathematical foundations of the widely used linear bandit and Gaussian process bandit models; tensor and spectral methods for provable estimation of latent variable models (applicable to mixtures of Gaussians, HMMs, and LDA); and the first sharp analysis of the perturbed gradient descent algorithm, along with the design and analysis of numerous other convex and non-convex algorithms. He received the IBM Goldberg Best Paper Award (2007) for contributions to fast nearest neighbor search and the INFORMS Revenue Management and Pricing Section Best Paper Prize (2014). He was program chair for COLT 2011.

Sham was an undergraduate at Caltech, where he studied physics and worked under the guidance of John Preskill in quantum computing. He then completed his Ph.D. in computational neuroscience at the Gatsby Unit at University College London, under the supervision of Peter Dayan. He was a postdoc in the Department of Computer Science at the University of Pennsylvania, where he broadened his studies to include computational game theory and economics under the guidance of Michael Kearns. Sham has been a Principal Research Scientist at Microsoft Research, New England, an associate professor in the Department of Statistics at Wharton, UPenn, and an assistant professor at the Toyota Technological Institute at Chicago.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Designing and deploying social computing systems inside and outside the lab

Date and Time
Monday, October 21, 2019 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Adam Finkelstein

Andrés Monroy-Hernández
Social computing has permeated most aspects of our lives, from work to play. In recent years, however, social platforms have faced challenges ranging from labor concerns about crowd work to misinformation on social media. These and other challenges, along with the emergence of new technical platforms, create opportunities to re-imagine the future of the field. In this talk, I will discuss my research designing and deploying social computing systems to help people connect and collaborate to learn new skills, crowdsource news reporting, and delegate tasks to hybrid AI systems. First, I will describe an online community I created to connect millions of young creators to learn computational thinking skills as they create, share, and remix games and animations on the web. Then I will turn to how this work inspired my next set of systems to connect urban residents as they share news about their local communities. Finally, I will discuss a novel workplace productivity tool we created to enable information workers to delegate tasks to hybrid intelligence systems powered by humans and AI. I will close by articulating the challenges and opportunities ahead for the field, and the ways my team is beginning to explore new systems that support more authentic and intimate connections, building on new technologies including wearable sensors and AR.

Bio: Andrés Monroy-Hernández is a lead research scientist at Snap Inc., where he manages the human-computer interaction research team. He is also an affiliate professor at the University of Washington. His work focuses on the design and study of social computing technologies that help people connect and collaborate. Previously, he was at Microsoft Research for six years, where he led the research and development of Calendar.help, Microsoft's first hybrid AI product. His research has received best paper awards at CHI, CSCW, HCOMP, and ICWSM, and has been featured in The New York Times, CNN, Wired, BBC, and The Economist. Andrés was named one of the 35 Innovators Under 35 in Latin America by MIT Technology Review, and one of the most influential Latinos in Tech by CNET. Andrés holds a master's degree and a Ph.D. from MIT, where he led the creation of the Scratch online community, and a BS from Tec de Monterrey in México. More info at http://andresmh.com

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Enabling Continuous Learning through Synaptic Plasticity in Hardware

Date and Time
Thursday, November 7, 2019 - 12:30pm to 1:30pm
Location
Engineering Quadrangle B205
Type
CS Department Colloquium Series
Host
Margaret Martonosi

Ever since modern computers were invented, the dream of creating artificial intelligence (AI) has captivated humanity. We are fortunate to live in an era when, thanks to deep learning (DL), computer programs have paralleled, and in many cases even surpassed, human-level accuracy in tasks like visual perception and speech synthesis. However, we are still far from realizing general-purpose AI. The problem lies in the fact that the development of supervised-learning-based DL solutions today is mostly open loop. A typical DL model is created by a team of experts hand-tuning the deep neural network (DNN) topology over multiple iterations, followed by training over petabytes of labeled data. Once trained, the DNN provides high accuracy for the task at hand; if the task changes, however, the model must be re-designed and re-trained before it can be deployed. A general-purpose AI system, in contrast, needs to constantly interact with the environment and learn by adding and removing connections within the DNN autonomously, just as our brain does. This ability is known as synaptic plasticity.

In this talk, we will present our research efforts toward enabling general-purpose AI by leveraging plasticity in both algorithms and hardware. First, we will present GeneSys (MICRO 2018), a HW-SW prototype of a closed-loop learning system that continuously evolves the structure and weights of a DNN for the task at hand using genetic algorithms, providing 100-10000x higher performance and energy efficiency than state-of-the-art embedded and desktop CPU and GPU systems. Next, we will present a DNN accelerator substrate called MAERI (ASPLOS 2018), built using lightweight, non-blocking, reconfigurable interconnects, that supports efficient mapping of regular and irregular DNNs with arbitrary dataflows, providing ~100% utilization of all compute units and resulting in 3x higher speed and energy efficiency than our prior work Eyeriss (ISSCC 2016). Finally, time permitting, we will describe our research in enabling rapid design-space exploration and prototyping of hardware accelerators using our dataflow DSL and cost model, MAESTRO (MICRO 2019).
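The closed-loop idea behind GeneSys, evolving a network's structure with a genetic algorithm rather than hand-tuning it, can be illustrated with a toy evolutionary loop. This is a stand-in sketch, not the actual GeneSys algorithm: the genome encoding and fitness function below are placeholders.

```python
import random

random.seed(0)

# Toy genetic loop: each "genome" is a bit vector standing in for a network
# structure; fitness here is just the number of 1s (a placeholder objective).
def fitness(genome):
    return sum(genome)

def evolve(pop_size=20, genes=16, generations=30):
    pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # selection: keep the fittest half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(genes)   # single-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(genes)     # point mutation: flip one bit
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In GeneSys the loop is closed in hardware: evaluation (the expensive step) runs on a dedicated accelerator, which is what makes continuous evolution practical.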

Bio: 
Tushar Krishna is an Assistant Professor in the School of Electrical and Computer Engineering at Georgia Tech. He also holds the ON Semiconductor Junior Professorship. He has a Ph.D. in Electrical Engineering and Computer Science from MIT (2014), a M.S.E in Electrical Engineering from Princeton University (2009), and a B.Tech in Electrical Engineering from the Indian Institute of Technology (IIT) Delhi (2007). Before joining Georgia Tech in 2015, Dr. Krishna spent a year as a post-doctoral researcher at Intel, Massachusetts.

Dr. Krishna’s research spans computer architecture, interconnection networks, networks-on-chip (NoC) and deep learning accelerators - with a focus on optimizing data movement in modern computing systems. Three of his papers have been selected for IEEE Micro’s Top Picks from Computer Architecture, one more received an honorable mention, and two have won best paper awards. He received the National Science Foundation (NSF) CRII award in 2018, and both a Google Faculty Award and a Facebook Faculty Award in 2019. He also received the “Class of 1940 Course Survey Teaching Effectiveness” Award from Georgia Tech in 2018.

Painting in More Dimensions

Date and Time
Monday, October 14, 2019 - 4:30pm to 5:30pm
Location
James Stewart Theater, Lewis Center for the Arts, 185 Nassau St.
Type
CS Department Colloquium Series
Host
Adam Finkelstein

Alexa Meade painting half her face.
Artist Alexa Meade paints on the human body and three-dimensional spaces, creating the illusion that our reality is a two-dimensional painting. She will discuss her artistic process and collaborations with mathematicians, magicians, and theoretical physicists.

Her groundbreaking work has been exhibited around the world at the Grand Palais in Paris, the United Nations in New York, the Smithsonian National Portrait Gallery in Washington, DC, and Shibuya Crossing in Tokyo. Her solo show on Rodeo Drive in Beverly Hills was attended by forty thousand people. She has created large-scale interactive installations at Coachella, Cannes Lions, and Art Basel. Alexa has been commissioned by BMW, Sony, Adidas, and the San Francisco Symphony Orchestra. She painted pop star Ariana Grande for her iconic "God is a Woman" music video, which has about 250 million views.

Alexa has lectured at TED, Apple, and Stanford and accepted an invitation to the White House under President Obama. She has been honored with the "Disruptive Innovation Award" by the Tribeca Film Festival and has been Artist-in-Residence at both Google and the Perimeter Institute for Theoretical Physics. InStyle has named Alexa among their "Badass Women."

FREE & OPEN TO THE PUBLIC

AI and Reshaping the Industry

Date and Time
Wednesday, April 3, 2019 - 4:30pm to 5:30pm
Location
Computer Science Large Auditorium (Room 104)
Type
CS Department Colloquium Series
Speaker
Corey Sanders '04, from Microsoft Azure

Corey Sanders
Artificial Intelligence has quickly moved from being a thing of science fiction to driving business forward across a wide variety of industries including retail, manufacturing, oil and gas, health, and financial services. It is becoming a ubiquitous component of normal customer IT, whether to improve end-customer experience or reduce back-end costs. In this talk, Corey will walk through some real and technical examples of AI and how it is reshaping every industry. Corey will also delve into the responsibility we all have in considering the impact of this technology as it continues to advance and improve. 

Bio:
Corey is the Corporate Vice President for Microsoft Solutions. He is responsible for the sales strategy and corporate technical sales team for Azure, M365, and Dynamics. Previously, Corey was the Head of Product for Azure Compute, responsible for product, strategy, and technical vision for the core Azure compute services: Azure Virtual Machines (both Windows and Linux), Containers, Kubernetes, OSS, Service Fabric, Event Grid, Service Bus, Management, Migration, and serverless computing such as Functions. Since joining Microsoft in 2004, Corey has led program management for multiple Azure services, was a developer on the Windows Serviceability team, owning areas across the networking and kernel stack for Windows, and was the founder of Azure's Infrastructure-as-a-Service business. Corey graduated from Princeton in 2004 with a BSE in Computer Science.

Making Parallelism Pervasive with the Swarm Architecture

Date and Time
Tuesday, May 29, 2018 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Margaret Martonosi

Daniel Sanchez
With Moore's Law coming to an end, architects must find ways to sustain performance growth without technology scaling. The most promising path is to build highly parallel systems that harness thousands of simple and efficient cores. But this approach will require new techniques to make massive parallelism practical, as current multicores fall short of this goal: they squander most of the parallelism available in applications and are too hard to program.

I will present Swarm, a new architecture that successfully parallelizes algorithms often considered sequential and that is much easier to program than conventional multicores. Swarm programs consist of tiny tasks, as small as tens of instructions each. Parallelism is implicit: all tasks follow a programmer-specified total or partial order, eliminating the correctness pitfalls of explicit synchronization (e.g., deadlock and data races). To scale, Swarm executes tasks speculatively and out of order, and efficiently speculates thousands of tasks ahead of the earliest active task to uncover enough parallelism.
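The execution model described above, tiny tasks run in a programmer-specified order, can be mimicked in software with a priority queue. The toy scheduler below runs tasks sequentially in timestamp order and lets a task spawn children at later timestamps; the API is purely illustrative and is not Swarm's actual interface (Swarm executes such tasks speculatively and in parallel in hardware):

```python
import heapq
import itertools

class ToyScheduler:
    """Run tiny tasks in timestamp order; tasks may enqueue child tasks."""
    def __init__(self):
        self._heap = []
        self._tie = itertools.count()  # break ties between equal timestamps

    def enqueue(self, timestamp, fn, *args):
        heapq.heappush(self._heap, (timestamp, next(self._tie), fn, args))

    def run(self):
        while self._heap:
            ts, _, fn, args = heapq.heappop(self._heap)
            fn(self, ts, *args)

# Example: a task at timestamp 0 spawns two children in program-specified order.
log = []
def child(sched, ts, name):
    log.append((ts, name))
def root(sched, ts):
    sched.enqueue(ts + 2, child, "b")
    sched.enqueue(ts + 1, child, "a")

sched = ToyScheduler()
sched.enqueue(0, root)
sched.run()
# log is now [(1, "a"), (2, "b")]
```

The point of the hardware is to run many such tasks at once while preserving the illusion of this sequential, timestamp-ordered execution.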

Swarm builds on decades of work on speculative architectures and contributes new techniques to scale to large core counts, including a new execution model, speculation-aware hardware task management, selective aborts, and scalable ordered task commits. Swarm also incorporates new techniques to exploit locality and to harness nested parallelism, making parallel algorithms easy to compose and uncovering abundant parallelism in large applications.

Swarm accelerates challenging irregular applications from a broad set of domains, including graph analytics, machine learning, simulation, and databases. At 256 cores, Swarm is 53-561x faster than a single-core system, and outperforms state-of-the-art software-only parallel algorithms by one to two orders of magnitude. Besides achieving near-linear scalability, the resulting Swarm programs are almost as simple as their sequential counterparts, as they do not use explicit synchronization.

Bio:
Daniel Sanchez is an Associate Professor of Electrical Engineering and Computer Science at MIT. His research interests include parallel computer systems, scalable and efficient memory hierarchies, architectural support for parallelization, and architectures with quality-of-service guarantees. He earned a Ph.D. in Electrical Engineering from Stanford University in 2012 and received the NSF CAREER award in 2015.

Better Understanding of Efficient Dynamic Data Structures

Date and Time
Tuesday, May 8, 2018 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Ran Raz

Huacheng Yu
Data structures have applications and connections to algorithm design, database systems, streaming algorithms and other areas of computer science. Understanding what efficient data structures can do (and what they cannot do) is crucial to these applications. In this talk, I will present my work in analyzing efficient data structures and proving what they cannot accomplish. I will focus on the recent development in building new connections between dynamic data structures and communication complexity, as well as a new approach to analyzing dynamic data structures with Boolean outputs and super-logarithmic time.

Bio:
Huacheng Yu is a postdoctoral researcher in the Theory of Computing group at Harvard University. He obtained his PhD from Stanford University in 2017 under the supervision of Ryan Williams and Omer Reingold. He also holds a Bachelor's degree from Tsinghua University (2012). His primary research interests are data structure lower bounds. He also works in algorithm design and communication complexity.

Machine Learning Algorithms for Exploiting Spectral Structures of Biological Networks

Date and Time
Friday, April 27, 2018 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Host
Mona Singh

Bo Wang
Networks are ubiquitous in biology, where they encode connectivity patterns at all scales of organization, from populations to a single cell. Extracting and understanding the non-trivial topological features and structures inherent in these networks is critical to understanding interactions within complicated biological systems. In this talk, I will introduce recent developments in machine learning algorithms that exploit the spectral structure of networks for a wide range of biological applications, ranging from single-cell analysis to function prediction on protein-protein interaction networks. Specifically, I will first present a new method named SIMLR, which combines low-rank spectral regularization and multiple-kernel learning to construct cell networks from sparse, noisy single-cell RNA-seq data. The learned cell networks enable effective dimension reduction, clustering, and visualization. Second, I will discuss a novel method, Network Enhancement (NE), that aims to denoise complex networks such as protein-protein interaction networks without corrupting their spectral structure, thereby improving function prediction accuracy. Lastly, I will briefly introduce recent advances in which deep convolutional neural networks are applied to biological networks (e.g., drug-target networks) as a first-order spectral approximation of network structure.
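The low-rank spectral idea underlying methods like SIMLR and NE can be illustrated in miniature: keep only the dominant eigenpairs of a symmetric similarity matrix, which preserves its block (cluster) structure while discarding much of the noise. This is a toy sketch, not SIMLR or NE themselves; the helper name and the synthetic two-block network are made up for illustration.

```python
import numpy as np

def low_rank_denoise(W, k):
    """Keep only the top-k eigenpairs of a symmetric similarity matrix W."""
    vals, vecs = np.linalg.eigh(W)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]      # indices of the k largest
    return (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T

# Synthetic network: two clusters of 5 nodes each, plus symmetric noise.
rng = np.random.default_rng(0)
block = np.kron(np.eye(2), np.ones((5, 5)))   # clean 2-block structure
noise = 0.1 * rng.standard_normal((10, 10))
W = block + (noise + noise.T) / 2             # keep the matrix symmetric
W_clean = low_rank_denoise(W, k=2)            # rank-2: one eigenpair per block
```

Choosing k equal to the number of expected clusters is the key modeling decision; the full methods learn this structure far more carefully (e.g., via diffusion in NE).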

Bio: 
Bo Wang is a recent PhD graduate in Computer Science at Stanford University, an IEEE and CVF Fellow, and an NSF Graduate Research Fellow. His research focuses on machine learning (particularly deep learning) and its many applications in computer vision (e.g., image segmentation, object detection) and computational biology (e.g., single-cell analysis, integrative cancer subtyping). Prior to Stanford, he received his master's degree at the University of Toronto, majoring in numerical analysis.
