Colloquium

Qualcomm: Past, Present and Future of 5G Millimeter Wave

Date and Time
Thursday, June 17, 2021 - 1:30pm to 2:30pm
Location
Zoom Webinar (off campus)
Type
Colloquium
Speaker
Ozge Koymen, from Qualcomm
Host

Please register here


Abstract: After more than a decade of advanced R&D and ecosystem trials, commercial 5G mmWave service is now available in more than 55 U.S. cities and 160 areas in Japan. Looking forward, we expect 5G mmWave to expand into new geographic regions across the globe, and new device types and tiers will emerge to take full advantage of mmWave’s virtually unlimited capacity. On the research front, Qualcomm continues to push the technology boundaries of mmWave for 5G/6G by bringing new capabilities and enhancements. Join this seminar to:

  • Review the key technical achievements and milestones at Qualcomm that enabled the commercialization of 5G mmWave systems.
  • See our vision for 5G mmWave and the new opportunities it is poised to bring for the broader ecosystem.
  • Learn about the mmWave capabilities and enhancements coming in 3GPP Release 17 and beyond (e.g., Integrated Access and Backhaul, 60 GHz and beyond, IIoT).
  • Track the latest update on the global commercial rollout of 5G mmWave networks and devices.

Bio: Ozge Koymen is a Senior Director of Technology at Qualcomm Technologies, Inc., where he has been since 2006. He has led the 5G millimeter-wave program within Qualcomm R&D since early 2015, from early conceptual evaluation to commercial deployment. His previous areas as a technical contributor include Wireless Backhaul, Small Cells, LTE-D, LTE, and UMB. Prior to Qualcomm, he was a member of Flarion Technologies from 2003 to 2006, developing a pioneering OFDMA cellular system, Flash-OFDM. His earlier work experience includes full-time and consulting work for Impinj, Inc. (2000-2003) and TRW (1996-2000). He received his B.S. in Electrical and Computer Engineering from Carnegie Mellon University in 1996 and his M.S. and Ph.D. in Electrical Engineering from Stanford University in 1997 and 2003, respectively.

JAX: Accelerated machine learning research via composable function transformations in Python

Date and Time
Tuesday, October 15, 2019 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Host
Ryan Adams

Dougal Maclaurin
JAX is a system for high-performance machine learning research. It offers the familiarity of Python+NumPy and the speed of hardware accelerators, and it enables the definition and composition of function transformations useful for machine learning programs. In particular, these transformations include automatic differentiation, automatic batching, end-to-end compilation (via XLA), and parallelization over multiple accelerators. They are the key to JAX's power and to its relative simplicity.
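As an illustrative sketch (not drawn from the talk itself), composing these transformations in JAX might look like the following; the loss function and values here are hypothetical:

```python
import jax
import jax.numpy as jnp

# A toy scalar loss over parameters (w, b) and one input example.
def loss(params, x):
    w, b = params
    return (w * x + b) ** 2

# Automatic differentiation: gradient with respect to params.
dloss = jax.grad(loss)

# Automatic batching: map the gradient over a batch of inputs.
batched_dloss = jax.vmap(dloss, in_axes=(None, 0))

# End-to-end compilation of the whole composition via XLA.
fast_batched_dloss = jax.jit(batched_dloss)

params = (2.0, -1.0)
xs = jnp.array([0.0, 1.0, 2.0])
grads = fast_batched_dloss(params, xs)  # a (dw, db) pair per input
```

Each transformation returns an ordinary Python function, which is why they compose freely; that composability is the "relative simplicity" the abstract refers to.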

JAX had its initial open-source release in December 2018 (https://github.com/google/jax). It is currently being used by several groups of researchers for a wide range of advanced applications, from studying spectra of neural networks, to probabilistic programming and Monte Carlo methods, and scientific applications in physics and biology. Users appreciate JAX most of all for its ease of use and flexibility.

Bio: Dougal Maclaurin is a research scientist at Google. He works on programming languages and systems for machine learning, particularly the Python library JAX. He started Autograd, a system for automatic differentiation in Python, which has inspired the design of several systems, including PyTorch, MinPy, Torch Autograd and Julia Autograd. He is a co-founder of Day Zero Diagnostics, a startup developing a sequencing-based diagnostic for drug-resistant infections. He received his Ph.D. from Harvard in 2016, working with Ryan Adams on the development of methods for machine learning. His work on scalable MCMC, "Firefly Monte Carlo", was recognized with the Best Paper award at UAI 2014.


Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Challenges in Cloud Networking

Date and Time
Wednesday, September 18, 2019 - 1:30pm to 2:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Muhammad Jehangir Amjad, from Google
Host
Jennifer Rexford

Muhammad Jehangir Amjad
Google's products, such as Search, YouTube, and Gmail, are used by billions of people the world over, and Google Cloud hosts some of the most popular services in the world. Building systems that scale to support all these applications is among the greatest challenges at the company. Underlying all of these systems is the network, which must provide low-latency, high-throughput, high-capacity, highly available, and secure access to compute and storage. While Google has been enormously successful in achieving these goals, cloud networking faces exciting new challenges today. With the demise of Moore's Law and explosive data growth, the implications for the performance, reliability, predictability, and cost efficiency of networking, from hardware to communication protocols, will be profound.

This talk will focus on highlighting some of the challenges in networking alluded to above. Additionally, we will discuss the current and future challenges in network telemetry systems and Google's approach to overcoming these challenges via statistical inference which allows us to estimate that which cannot be measured.
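As one hedged illustration of this "estimate what cannot be measured" idea (a textbook network-tomography setup, not Google's actual telemetry system): per-link quantities are often not directly observable, but end-to-end path measurements constrain them linearly, so they can be recovered by least squares. The paths, links, and numbers below are made up.

```python
import numpy as np

# Routing matrix: rows are measured paths, columns are links.
# A[i, j] == 1 iff path i traverses link j.
A = np.array([
    [1, 1, 0],  # path 0 uses links 0 and 1
    [0, 1, 1],  # path 1 uses links 1 and 2
    [1, 0, 1],  # path 2 uses links 0 and 2
], dtype=float)

# Observed end-to-end path delays in ms (these ARE measurable).
y = np.array([30.0, 50.0, 40.0])

# Per-link delays (NOT directly measurable): least-squares
# estimate of x in A @ x = y.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Real telemetry systems face noisy, incomplete measurements and underdetermined routing matrices, which is where the statistical machinery the abstract alludes to comes in.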

Bio:
Muhammad Jehangir Amjad is a Software Engineer on the Network Infrastructure team at Google, working on inference and statistical learning over data produced by network telemetry systems. He joined Google from MIT, where he holds an appointment as a Lecturer of Machine Learning in CSAIL. Jehangir received his PhD from the Operations Research Center (ORC) and the Laboratory for Information and Decision Systems (LIDS) at MIT, under the supervision of Prof. Devavrat Shah. He received his BSE in Electrical Engineering from Princeton University.

To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Security and Privacy Guarantees in Machine Learning with Differential Privacy

Date and Time
Tuesday, September 17, 2019 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Host
Amit Levy

Machine learning (ML) is becoming a critical foundation for how we construct the code driving our applications, cars, and life-changing financial decisions.  Yet, it is often brittle and unstable, making decisions that are hard to understand and can be exploited.  As one example, tiny changes to an input can cause dramatic changes in predictions; this results in decisions that surprise, appear unfair, or enable attack vectors such as adversarial examples.  As another example, models trained on users' data have been shown to encode not only general trends from large datasets but also very specific, personal information from these datasets, such as social security numbers and credit card numbers from emails; this threatens to expose users' secrets through ML predictions or parameters.  Over the years, researchers have proposed various approaches to address these rather distinct security, privacy, and transparency challenges.  Most of the work has been best effort, which is insufficient if ML is to become a rigorous basis for how we construct our code.

This talk positions differential privacy (DP) -- a theory developed by the privacy community -- as a versatile foundation for building into ML much-needed guarantees of not only privacy but also of security, stability, and transparency.  As supporting evidence, I first present PixelDP, a scalable certified defense against adversarial examples that leverages DP theory to guarantee a level of robustness against this attack.  I then present Sage, a DP ML platform that bounds the leakage of personal secrets through ML models while addressing some of the most pressing challenges of DP, such as the "running out of privacy budget" problem.  Both PixelDP and Sage are designed from a pragmatic systems perspective and illustrate that DP theory is powerful but requires adaptation to achieve practical guarantees for ML workloads.
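Neither PixelDP nor Sage is reproduced here, but the DP primitive both build on can be sketched generically: add noise calibrated to a query's sensitivity. The following is a minimal Laplace-mechanism example; the function name and data are hypothetical.

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one
    record changes the count by at most 1), so Laplace noise with
    scale 1/epsilon yields epsilon-DP.
    """
    true_count = sum(1 for row in data if predicate(row))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(0)
ages = [23, 35, 41, 29, 52, 61]
noisy = laplace_count(ages, lambda a: a >= 40, epsilon=0.5, rng=rng)
```

The "running out of privacy budget" problem the abstract mentions arises because each such release consumes epsilon, and the total spent across all releases must stay bounded.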

Bio:
Roxana Geambasu is an Associate Professor of Computer Science at Columbia University and a member of Columbia's Data Sciences Institute. She joined Columbia in Fall 2011 after finishing her Ph.D. at the University of Washington.  For her work in cloud and mobile data privacy, she received: an Alfred P. Sloan Faculty Fellowship, an NSF CAREER award, a Microsoft Research Faculty Fellowship, several Google Faculty awards, a "Brilliant 10" Popular Science nomination, the Honorable Mention for the 2013 inaugural Dennis M. Ritchie Doctoral Dissertation Award, a William Chan Dissertation Award, two best paper awards at top systems conferences, and the first Google Ph.D. Fellowship in Cloud Computing.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Farewell to Servers: Software and Hardware Approaches towards Datacenter Resource Disaggregation

Date and Time
Tuesday, May 21, 2019 - 1:30pm to 2:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Host
Amit Levy

Yiying Zhang
Datacenters have been using the "monolithic" server model for decades, where each server hosts a set of hardware devices like CPU and DRAM on a motherboard and runs an OS on top to manage the hardware resources. This monolithic server model fundamentally restricts datacenters from achieving efficient resource packing, hardware rightsizing, and broad hardware heterogeneity. Recent hardware and application trends such as serverless computing further call for a rethinking of the long-standing server-centric model. My answer is to "disaggregate" monolithic servers into network-attached hardware components that host different hardware resources and offer different functionalities (e.g., a processor component for computation, a memory component for fast data accesses). I believe that after evolving from physical (DC-1.0) to virtual (DC-2.0), datacenters should evolve further into disaggregated ones (DC-3.0), where hardware resources can be allocated and scaled to the exact amount that applications use and can be individually managed and customized for different application needs. By eliminating servers, DC-3.0 disrupts designs and technologies in almost every layer of today's datacenters, from hardware and networking to OS and applications. My lab undertook pioneering efforts in building an end-to-end solution for DC-3.0 with a new OS, a new hardware platform, and a new network system.

This talk will focus on two systems that are central to the design of DC-3.0: 1) LegoOS, a new distributed operating system designed for managing disaggregated resources. LegoOS splits OS functionalities into different units, each running at a hardware component and managing that component's hardware resources. LegoOS enables the disaggregation and customization of OS functionalities, a significant step towards building DC-3.0's software infrastructure. 2) LegoFPGA, a new approach to using FPGAs to efficiently manage and virtualize hardware resources. LegoFPGA offers a solution to co-design application, OS, and hardware functionalities and customize them for different hardware resources and application domains, an important step towards building DC-3.0's hardware infrastructure. With LegoOS and LegoFPGA, we demonstrate that separating core OS and hardware functionalities is not only feasible but can substantially improve performance per dollar over the current monolithic server model.

Bio:
Yiying Zhang is an assistant professor in the School of Electrical and Computer Engineering at Purdue University. Her research interests span operating systems, distributed systems, computer architecture, and datacenter networking. She also works at the intersection of systems with programming languages, security, and AI/ML. She won an OSDI Best Paper Award in 2018 and an NSF CAREER Award in 2019. Yiying's lab is now among the few groups in the world that build new OSes and full-stack, cross-layer systems. Yiying received her Ph.D. from the Department of Computer Sciences at the University of Wisconsin-Madison under the supervision of Andrea and Remzi Arpaci-Dusseau, and worked as a postdoctoral scholar at the University of California, San Diego before joining Purdue.

To request accommodations for a disability, please contact Emily Lawrence at emilyl@cs.princeton.edu, at least one week prior to the event.

System and Architecture Design for Safe and Reliable Autonomous Robotic Applications

Date and Time
Tuesday, May 14, 2019 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Host
Margaret Martonosi

Jishen Zhao
The rapid development of smart technology in edge computing systems has paved the way for us to embrace the technology movement of self-driving cars and autonomous service robots. To enable the wide adoption of these autonomous robotic applications, reliability is one of the fundamental goals of computing system and architecture design. In this talk, I will present our recent exploration of safe and reliable system and architecture design for autonomous robotic applications. I will start by presenting an architecture design that supports fast system recovery with persistent memory at low performance cost. To evaluate and guide our system design, I will introduce our safety model and architecture design strategies for self-driving cars, based on our field study of running real industrial Level-4 autonomous driving fleets. Finally, I will describe a Linux-container-based resource management framework designed to improve the reliability and safety of self-driving cars and service robots.

Bio: Jishen Zhao is an Assistant Professor in the Computer Science and Engineering Department at the University of California, San Diego. Her research spans and stretches the boundary between computer architecture and system software, with a particular emphasis on memory and storage systems, domain-specific acceleration, and system reliability. Her research is driven by both emerging technologies (e.g., nonvolatile memories, 3D-stacked memory) and modern applications (e.g., smart home and autonomous robotic systems, deep learning, and big-data analytics). Before joining UCSD, she was an Assistant Professor at UC Santa Cruz and, before that, a research scientist at HP Labs. She is a recipient of an NSF CAREER Award and a MICRO Best Paper Honorable Mention.

Lunch will be available at 12:00pm
To request accommodations for a disability, please contact Emily Lawrence at emilyl@cs.princeton.edu, at least one week prior to the event.

CacheLib - Unifying & Abstracting HW for caching at Facebook

Date and Time
Friday, May 3, 2019 - 12:30pm to 1:30pm
Location
Computer Science 302
Type
Colloquium
Speaker
Michael Uhlar and Sathya Gunasekar, from Facebook
Host
Wyatt Lloyd

In order to operate with high efficiency, Facebook's infrastructure relies on caching in many different backend services. These services place very different demands on their caches, e.g., in terms of working set sizes, access patterns, and throughput requirements. Historically, each service used a different cache implementation, leading to inefficiency and duplicated code and effort.

CacheLib is an embedded caching engine that addresses these requirements with a unified API for building a cache across many hardware media. CacheLib transparently combines volatile and non-volatile storage in a single caching abstraction. To meet these varied demands, CacheLib successfully provides a flexible, high-performance solution for many different services at Facebook. In this talk, we describe CacheLib's design, challenges, and several lessons learned.
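The talk's details are Facebook-internal, but the core idea of one API over tiered volatile and non-volatile storage can be sketched as follows. The class, its names, and the promotion/eviction policy are purely illustrative and do not reflect CacheLib's real interface:

```python
from collections import OrderedDict

class TieredCache:
    """Illustrative two-tier cache: a small LRU "DRAM" tier backed
    by a larger "flash" tier, behind one get/set interface."""

    def __init__(self, dram_items, flash_items):
        self.dram = OrderedDict()
        self.flash = OrderedDict()
        self.dram_cap = dram_items
        self.flash_cap = flash_items

    def set(self, key, value):
        self.dram[key] = value
        self.dram.move_to_end(key)  # mark as most recently used
        if len(self.dram) > self.dram_cap:
            # Demote the coldest DRAM item to the flash tier.
            old_key, old_val = self.dram.popitem(last=False)
            self.flash[old_key] = old_val
            if len(self.flash) > self.flash_cap:
                self.flash.popitem(last=False)  # evict entirely

    def get(self, key):
        if key in self.dram:
            self.dram.move_to_end(key)
            return self.dram[key]
        if key in self.flash:
            # Promote a flash hit back into the DRAM tier.
            value = self.flash.pop(key)
            self.set(key, value)
            return value
        return None
```

A production engine like CacheLib must additionally handle concurrency, serialization to flash, and admission policies that protect flash write endurance; callers, however, see only the single get/set abstraction.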

To request accommodations for a disability, please contact Emily Lawrence at emilyl@cs.princeton.edu, at least one week prior to the event.

Earable Computers: Ear-worn Systems for Healthcare, HCI, BCI, and Brain Stimulation

Date and Time
Thursday, February 7, 2019 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Host
Jennifer Rexford

Tam Vu
This talk introduces the concept of "Earable computers": small computing and actuating devices that are worn inside, behind, around, or on a user's ears. Earable sensing and actuation are motivated by the fact that human ears are relatively close to the sources of many important physiological signals, such as the brain, eyes, facial muscles, heart, and core body temperature. Therefore, placing sensors and associated stimulators inside the ear canals or behind the ears could open up a wide range of applications, from improving cognitive function, keeping truck drivers from falling asleep while driving, and extending attention span, to quantifying pain and suffering, reducing opioid use, and suppressing seizures, to name a few. This talk will discuss the opportunities that earable systems could bring and the system challenges that must be addressed to unleash their potential. I will share our experience and lessons learned through realizing such systems in the context of human-computer interaction, brain-computer interaction, and healthcare.

Bio: 
Tam Vu is an Assistant Professor in the Computer Science Department at the University of Colorado Boulder. He directs the Mobile and Networked Systems (MNS) Lab at the university, where he and his team conduct systems research in the areas of wearable and mobile systems, including mobile healthcare, mobile security, cyber-physical systems, and wireless sensing. His research has been recognized with an NSF CAREER award, two Google Faculty Awards, ten best paper awards, a best paper nomination, and research highlights in flagship venues in mobile systems research, including MobiCom, MobiSys, and SenSys. He is also actively pushing his research outcomes into practice through technology transfer, with 17 patents filed and two start-ups that he co-founded to commercialize them.

To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

Hardware is the New Software: Finding Exploitable Bugs in Hardware Designs

Date and Time
Monday, February 4, 2019 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Host
Margaret Martonosi

Cynthia Sturton
Bugs in hardware designs can create vulnerabilities that open the machine to malicious exploit. Despite mature functional validation tools and new research in designing secure hardware, the question of how to find and recognize those bugs remains open. My students and I have developed two tools in response to this question. The first is a security specification miner; it semi-automatically identifies security-critical properties of a design specified at the register transfer level. The second tool, Coppelia, is a symbolic execution engine that explores a hardware design and generates complete exploits for the security bugs it finds. We use Coppelia and our set of generated security properties to find new bugs in the open-source RISC-V and OR1k CPU architectures.

Bio:
Cynthia Sturton is an Assistant Professor and Peter Thacher Grauer Fellow at the University of North Carolina at Chapel Hill. She leads the Hardware Security @ UNC research group to investigate the use of static and dynamic analysis to protect against vulnerable hardware designs. Her research is funded by several National Science Foundation awards, the Semiconductor Research Corporation, Intel, a Junior Faculty Development Award from the University of North Carolina, and a Google Faculty Research Award. She was recently awarded the Computer Science Departmental Teaching Award at the University of North Carolina. Sturton received her B.S.E. from Arizona State University and her M.S. and Ph.D. from the University of California, Berkeley. 

Lunch for talk attendees will be available at 12:00pm

***CANCELED*** Make Your Database Dream of Electric Sheep: Designing for Autonomous Operation

Date and Time
Friday, November 16, 2018 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium

Andy Pavlo
***DUE TO WEATHER THIS TALK HAS BEEN CANCELED***

In the last 20 years, researchers and vendors have built advisory tools to assist DBAs in tuning and physical design. Most of this previous work is incomplete because these tools require humans to make the final decisions about any database changes, and they are reactionary measures that fix problems only after they occur. What is needed for a "self-driving" DBMS are components that are designed for autonomous operation. This will enable new optimizations that are not possible today because the complexity of managing these systems has surpassed the abilities of humans.

In this talk, I present the core design principles of an autonomous DBMS. These are necessary to support ample data collection, fast state changes, and accurate reward observations. I will discuss techniques for building a new autonomous DBMS from scratch, as well as the steps needed to retrofit an existing one for automated management. Our work is based on our experiences at CMU developing an automatic tuning service (OtterTune) and our self-driving DBMS (Peloton).

Bio:
Andy Pavlo is an Assistant Professor of Databaseology in the Computer Science Department at Carnegie Mellon University. He also used to raise clams.
