
Scaling AI Sustainably: An Uncharted Territory

Date and Time
Thursday, April 25, 2024 - 1:30pm to 2:30pm
Location
Engineering Quadrangle B205
Type
Seminar
Speaker
Carole-Jean Wu, from Meta
Host
Margaret Martonosi

The past 50 years have seen a dramatic increase in the amount of compute per person, in particular compute enabled by AI. Despite their positive societal benefits, AI technologies come with significant environmental implications. I will talk about the scaling trend and the carbon footprint of AI computing by examining the model development cycle, spanning data, algorithms, and system hardware. At the same time, we will consider the life cycle of system hardware from the perspective of hardware architectures and manufacturing technologies. I will highlight key efficiency optimization opportunities for cutting-edge AI technologies, from deep learning recommendation models to multi-modal generative AI tasks. To scale AI sustainably, we need to make AI, and computing more broadly, efficient and flexible. We must also go beyond efficiency and optimize across the life cycle of computing infrastructures, from hardware manufacturing to datacenter operation and end-of-life processing for the hardware. Based on industry experience and lessons learned, my talk will conclude with important development and research directions to advance the field of computing in an environmentally responsible and sustainable manner.

Bio: Carole-Jean Wu is a Director of Research Science at Meta. She is a founding member and a Vice President of MLCommons, a non-profit organization that aims to accelerate machine learning for the benefit of all. Dr. Wu also serves on the MLCommons Board as a Director, chaired the MLPerf Recommendation Benchmark Advisory Board, and co-chaired MLPerf Inference. Prior to Meta/Facebook, she was a professor at Arizona State University. She earned her M.A. and Ph.D. from Princeton and her B.Sc. from Cornell.

Dr. Wu's expertise sits at the intersection of computer architecture and machine learning. Her work spans datacenter infrastructures and edge systems, including developing energy- and memory-efficient systems and microarchitectures, optimizing systems for machine learning execution at scale, and designing learning-based approaches for system design and optimization. Dr. Wu's work has been recognized with several awards, including IEEE Micro Top Picks and ACM/IEEE Best Paper Awards. She was the Program Co-Chair of the Conference on Machine Learning and Systems (MLSys) in 2022, the Program Chair of the IEEE International Symposium on Workload Characterization (IISWC) in 2018, and the Editor for the IEEE MICRO Special Issue on Environmentally Sustainable Computing. She currently serves on the ACM SIGARCH/SIGMICRO CARES committee.


Cosponsored by the Department of Electrical and Computer Engineering and the Department of Computer Science

To request accommodations for a disability please contact Donna Ghilino, dg3548@princeton.edu, at least one week prior to the event.

Computing Quantum Excited States with Deep Neural Networks

Date and Time
Friday, April 5, 2024 - 4:30pm to 5:30pm
Location
Jadwin Hall A10
Type
Seminar
Speaker

David Pfau
Excited states of quantum systems are relevant for photochemistry, quantum dots, semiconductors, and more, but remain extremely challenging to calculate with conventional computational methods. In recent years, tools from machine learning have found useful application in computational quantum mechanics, especially in making ground-state calculations much more accurate. In this talk I will present a new algorithm for estimating the lowest excited states of a quantum system that works extremely well with deep neural networks. The method has no free parameters and requires no explicit orthogonalization of the different states; instead, it transforms the problem of finding excited states of a given system into that of finding the ground state of an expanded system. Expected values of arbitrary observables can be calculated, including off-diagonal expectations between different states such as the transition dipole moment. We show that by combining this method with the FermiNet and Psiformer Ansätze we can accurately recover vertical excitation energies and oscillator strengths on molecules as large as benzene. Beyond the examples on molecules presented here, we expect this technique will be of great interest for applications to atomic, nuclear, and condensed matter physics.

Bio: David Pfau is a staff research scientist at Google DeepMind and a visiting professor at Imperial College London in the Department of Physics, where he supervises work on applications of deep learning to computational quantum mechanics. His research interests span artificial intelligence, machine learning and scientific computing.

Prior to joining DeepMind, he was a PhD student at the Center for Theoretical Neuroscience at Columbia, where he worked on algorithms for analyzing and understanding high-dimensional data from neural recordings with Liam Paninski and on nonparametric Bayesian methods for predicting time series data with Frank Wood. He also had a stint as a research assistant in the group of Mike DeWeese at UC Berkeley, jointly between Physics and the Redwood Center for Theoretical Neuroscience. His current research interests include applications of machine learning to computational physics and connections between differential geometry and unsupervised learning.

The Relational Bottleneck and the Emergence of Cognitive Abstractions

Date and Time
Monday, March 11, 2024 - 12:00pm to 1:00pm
Location
Princeton Neuroscience Institute A32
Type
Seminar
Host
Nathaniel Daw

PNI/CS Special Seminar


Taylor Webb
Human cognition is characterized by a remarkable ability to transcend the specifics of limited experience to entertain highly general, abstract ideas. Efforts to explain this capacity have long fueled debates between proponents of symbol systems and statistical approaches. In this talk, I will present an approach that suggests a novel reconciliation of this long-standing debate by exploiting an inductive bias that I term the relational bottleneck. This approach imbues neural networks with key properties of traditional symbol systems, thereby enabling the data-efficient acquisition of cognitive abstractions, without the need for pre-specified symbolic representations. I will also discuss studies of perceptual decision confidence that illustrate the need to ground cognitive theories in the statistics of real-world data, and present evidence for the presence of emergent reasoning capabilities in large-scale deep neural networks (albeit requiring far more training data than is developmentally plausible). Finally, I will discuss the relationship of the relational bottleneck to other inductive biases, such as object-centric visual processing, and consider the potential mechanisms through which this approach may be implemented in the human brain.

Bio: Taylor Webb received his PhD in Cognitive Psychology and Neuroscience from Princeton University, where he studied with Michael Graziano and Jonathan Cohen. He is now a postdoctoral research fellow in the Psychology Department at UCLA, working with Hongjing Lu, Keith Holyoak, and Hakwan Lau. His research is focused on the question of how the brain extracts structured, abstract representations from noisy, high-dimensional perceptual inputs, and uses these representations to achieve intelligent behavior. To better understand these processes, his work exploits a bidirectional interaction between cognitive science and artificial intelligence. This involves both the use of techniques from AI to build models of higher-order cognitive processes (e.g., metacognition and analogical reasoning) that are grounded in realistic perceptual inputs (e.g., images and natural language), and the development of novel AI systems that take inspiration from cognitive science and neuroscience to achieve more human-like learning and reasoning.


To request accommodations for a disability please contact Yi Liu, irene.yi.liu@princeton.edu, at least one week prior to the event.

Principles of learning in distributed neural networks

Date and Time
Thursday, February 22, 2024 - 3:30pm to 4:30pm
Location
Princeton Neuroscience Institute A32
Type
Seminar
Host
Nathaniel Daw

PNI/CS Special Seminar


Andrew Saxe
The brain is an unparalleled learning machine, yet the principles that govern learning in the brain remain unclear. In this talk I will suggest that depth, the serial propagation of signals, may be a key principle sculpting learning dynamics in the brain and mind. To understand several consequences of depth, I will present mathematical analyses of the nonlinear dynamics of learning in a variety of simple solvable deep network models. Building from this theoretical work, I will trace implications for the development of human semantic cognition, showing that deep but not shallow networks exhibit hierarchical differentiation of concepts through rapid developmental transitions and ubiquitous semantic illusions between such transitions. Finally, turning to rodent systems neuroscience, I will show that deep network dynamics can account for individually variable yet systematic transitions in strategy as mice learn a visual detection task over several weeks. Together, these results provide analytic insight into how the statistics of an environment can interact with nonlinear deep learning dynamics to structure evolving neural representations and behavior over learning.

Bio: Andrew Saxe is a Professorial Research Fellow at the Gatsby Computational Neuroscience Unit and Sainsbury Wellcome Centre at UCL. He was previously an Associate Professor in the Department of Experimental Psychology at the University of Oxford. His research focuses on the theory of deep learning and its applications to phenomena in neuroscience and psychology. His work has been recognized by the Robert J. Glushko Dissertation Prize from the Cognitive Science Society, the Blavatnik UK Finalist Award in Life Sciences, and a Schmidt Science Polymath award. He is also a CIFAR Azrieli Global Scholar in the CIFAR Learning in Machines & Brains programme.


To request accommodations for a disability please contact Yi Liu, irene.yi.liu@princeton.edu, at least one week prior to the event.

ECE Seminar - Sustainable Computing: Implications, Challenges, Opportunities

Date and Time
Friday, November 10, 2023 - 2:00pm to 3:30pm
Location
Friend Center 004
Type
Seminar
Speaker
Udit Gupta, from Cornell Tech
Host
David Wentzlaff, ECE

Computing has a substantial environmental impact, with far-reaching consequences. Information and Communication Technologies (ICT) alone contribute 4% of global carbon emissions, a figure equivalent to the aviation industry's carbon footprint. This impact is poised to grow as the demand for computing continues to surge, particularly at the edge and within massive hyperscale data centers. Creating environmentally sustainable computing solutions presents unique challenges that go beyond energy-efficiency optimization and necessitates a holistic approach. For example, sustainable computing devices must account for the carbon emissions generated during manufacturing, the influence of renewable energy sources, which fluctuate by location and time, on a system's operational emissions, and the intricate interplay between embodied and operational emissions. In this presentation, we will delve into recent breakthroughs and the research challenges we confront as we strive to transition to environmentally responsible computing systems. With a primary focus on the carbon footprint of computing, we will explore recent initiatives aimed at characterizing and comprehending the environmental impact of computing. We will also introduce new carbon accounting models and frameworks, highlighting their application in pertinent domains, such as AI, to promote the development of sustainable AI solutions.

Bio: Udit Gupta is an Assistant Professor at Cornell Tech, where his research is dedicated to pioneering the development of cutting-edge and ethically responsible AI platforms through the innovation of novel computer systems and hardware. His recent endeavors are centered around the design and optimization of data center-based, deep learning-driven personalized recommendation systems and championing eco-friendly computing by meticulously considering the ecological implications associated with the entire hardware life cycle.

Udit's research has been tested and validated at scale in production data centers, and it has also found its place in established benchmarks and infrastructures that are widely embraced by the research community. His contributions were recognized with an IEEE MICRO Top Picks honorable mention in 2020 and with IEEE MICRO Top Picks awards in 2021 and 2022. His work earned Best Paper award nominations at PACT 2019 and DAC 2018. In addition, his doctoral research received honorable mentions for the SIGARCH Outstanding Ph.D. Dissertation award and the SIGMICRO Outstanding Ph.D. Dissertation award.

ECE Seminar: Towards Sustainable Artificial Intelligence and Datacenters

Date and Time
Friday, November 3, 2023 - 11:00am to 12:30pm
Location
Bowen Hall 222
Type
Seminar
Speaker
Benjamin C. Lee, from University of Pennsylvania
Host
David Wentzlaff, ECE

As the impact of artificial intelligence (AI) continues to proliferate, computer architects must assess and mitigate its environmental impact. This talk will survey strategies for reducing the carbon footprint of AI computation and datacenter infrastructure, drawing on data and experiences from industrial, hyperscale systems. First, we analyze the embodied and operational carbon implications of super-linear AI growth. Second, we re-think datacenter infrastructure and define a solution space for carbon-free computation with renewable energy, utility-scale batteries, and job scheduling. Finally, we develop strategies for datacenter demand response, incentivizing both batch and real-time workloads to modulate power usage in ways that reflect their performance costs. In summary, the talk provides a broad perspective on sustainable computing and outlines the many remaining directions for future work.

Bio: Benjamin C. Lee is a Professor of Electrical and Systems Engineering and of Computer and Information Science at the University of Pennsylvania. Dr. Lee's research focuses on computer architecture, energy efficiency, environmental sustainability, and security. He builds interdisciplinary links to machine learning and algorithmic game theory to better design and manage computer systems. His research has been recognized by IEEE Micro Top Picks, Communications of the ACM Research Highlights, and publication honors from the ASPLOS, HPCA, MICRO, and SC conferences. Dr. Lee was an Assistant and then Associate Professor at Duke University. He completed postdoctoral work in Electrical Engineering at Stanford University and received his Ph.D. in Computer Science from Harvard University and his B.S. in Electrical Engineering and Computer Science from the University of California, Berkeley. He has also held visiting positions at Meta AI, Microsoft Research, Intel Labs, and Lawrence Livermore National Lab. Dr. Lee received the NSF Computing Innovation Fellowship, NSF CAREER Award, and Google Faculty Research Award. He is an ACM Distinguished Scientist and IEEE Senior Member.


This seminar is hosted by the department of Electrical and Computer Engineering.

DeCenter Seminar: Blockchain does not operate in a vacuum: an Internet perspective

Date and Time
Wednesday, October 11, 2023 - 4:30pm to 5:30pm
Location
Friend Center Convocation Room
Type
Seminar
Speaker
Maria Apostolaki, from Princeton University
Reception following talk.


While public blockchains like Bitcoin and Ethereum are celebrated for their security and decentralized nature, they're built on the insecure, best-effort infrastructure of the Internet. In this talk, we dive into the often-overlooked issues arising from the Internet layer, uncovering how centralization and certain assumptions can make blockchains vulnerable. Bad actors can exploit these weak spots to hinder blockchain functionality and compromise security and user privacy. Beyond these challenges, we'll also spotlight the growing centralization concerns and their implications. But it's not all cautionary tales: we'll also share deployable solutions and strategies to fortify blockchains against these threats.

Bio: Maria Apostolaki, assistant professor of electrical and computer engineering, joined the Princeton faculty in August 2022 after a one-year postdoctoral position at Carnegie Mellon University. She earned her Ph.D. from ETH Zurich in 2021. Her research draws from networking, security, blockchain, and machine learning. Overall, her goal is to design and build networked systems that are secure, reliable, and performant. She was named a Rising Star in Computer Networking and Communications by N2Women, and she is a recipient of the Google Research Scholar Award, recognizing her work on improving network monitoring.

DeCenter Seminar: Non-Fungible Tokens (NFTs): Two Market Challenges

Date and Time
Thursday, May 4, 2023 - 12:30pm to 1:20pm
Location
Computer Science Small Auditorium (Room 105)
Type
Seminar
Speaker
Ari Juels, from Cornell Tech

Non-fungible tokens (NFTs) have leapt to prominence as a technical and social phenomenon. Collections such as the Bored Ape Yacht Club (BAYC) remain popular and have active communities even in the midst of the current “crypto winter.” The dynamics around NFT sales, however, leave much to be desired. I’ll discuss two problems in NFT markets. The first relates to “drops,” i.e., bulk sales, of popular NFTs today: it’s hard to achieve fairness in drops, because bots often snap up new NFT offerings and corner the market. The second problem relates to royalty payments for creators, one of the most attractive features of NFTs. Royalties have become a battleground among NFT markets, which often leads to creators being deprived of royalties or market access. I’ll discuss candidate solutions to these two problems using privacy-preserving oracle systems and a new cryptographic concept called Complete Knowledge.

Bio: Ari Juels is the Weill Family Foundation and Joan and Sanford I. Weill Professor in the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion and a Computer Science faculty member at Cornell University. He is a Co-Director of the Initiative for CryptoCurrencies and Contracts (IC3). He is also Chief Scientist at Chainlink Labs.


Lunch will be available beginning at noon.
A recording will be available on the DeCenter website after the seminar.

Nand to Tetris: Applied Computer Science from the Ground Up

Date and Time
Thursday, May 11, 2023 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Seminar
Speaker
Shimon Schocken, from Reichman University, Israel

I'll present COS205, a course that I taught at Princeton during the Spring 2023 semester. The course synthesizes many abstractions, algorithms, and data structures learned in core CS courses, and makes them concrete by building a complete computer system from the ground up. The methodology is based on guiding students through a set of 12 hands-on projects that gradually construct and unit-test a basic hardware platform and a modern software hierarchy, yielding a simple yet surprisingly powerful computer system. The hardware projects are done in a hardware description language, using a hardware simulator supplied by us. The software projects (an assembler, a virtual machine, and a compiler for an object-based, Java-like language) can be developed in any language, using APIs and test programs supplied by us. We also build a basic OS. The result is a general-purpose computer system, simulated on the student's PC. Students who wish to do so can build the computer physically, using an FPGA. All the course materials are freely available in open source at www.nand2tetris.org (joint work with Noam Nisan, Hebrew University).


Bio: Shimon Schocken, William R. Kenan Jr. Visiting Professor for Distinguished Teaching, School of Engineering, Princeton University (on leave from Reichman University, Israel)

Shimon Schocken is the founding dean of the Efi Arazi School of Computer Science at Reichman University. He is also a co-founder of the Google-Reichman School of Technology, an academic bootcamp dedicated to training young men and women from underprivileged backgrounds, and of Matific, a company that develops computer games that teach elementary school mathematics. He was a tenured professor at NYU and chairman of the computer science committee at Israel's Ministry of Education.


To request accommodations for a disability please contact Emily Lawrence, emilyl@cs.princeton.edu, at least one week prior to the event.

DeCenter Seminar - Internet-Scale Consensus In The Blockchain Era

Date and Time
Monday, March 27, 2023 - 3:30pm to 4:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Seminar
Speaker
Joachim Matthias Neu, from Stanford University
Host
Pramod Viswanath

Blockchains have ignited interest in Internet-scale consensus as a fundamental building block for decentralized applications and services, which in turn promise more egalitarian access and improved robustness to faults and abuse. While consensus has been studied in distributed systems for decades, Internet-scale consensus requires a fundamental rethinking of models, desiderata, and protocols. Participants are no longer a handful of computers in one company's data centers, but numerous mutually distrustful entities distributed across the Internet. I will discuss two examples of challenges and solutions for this new setting: (1) Ethereum, the second-largest blockchain, aims to strengthen consensus liveness under open participation, where parties come and go at will, and to strengthen safety to enable accountability in case of any safety violation. However, we show that no traditional single-ledger protocol can satisfy both strengthened properties simultaneously. To resolve this dilemma, we develop the multi-ledger consensus paradigm that is now the security design specification for Ethereum. A by-product of this work is a set of attacks on Ethereum's consensus protocol that prompted design changes. (2) Traditional network models do not capture rate constraints on communication and processing, leaving popular "provably secure" protocols vulnerable to attack. We show, via a new queuing-based model, how to schedule message handling securely. Our policies are simple enough to be forward-deployed at Internet service providers via a system that can also protect traffic of applications beyond blockchains.

Bio: Joachim Neu is a PhD candidate at Stanford University, advised by David Tse. His research focuses on the science and engineering of Internet-scale consensus as a fundamental building block for decentralized systems, using tools from distributed systems, applied probability, networking and communications, and applied cryptography. While a Master's student at the Technical University of Munich, he worked in information and coding theory. Joachim has received the Protocol Labs PhD Fellowship and the Stanford Graduate Fellowship.


To request accommodations for a disability please contact Andrea Mameniskis, amamenis@princeton.edu, at least one week prior to the event.
