
Distinguished Colloquium Series Speaker

Building Symbiotic Collaborations with HBCU STEM Faculty and Departments

Date and Time
Thursday, January 18, 2024 - 12:30pm to 1:30pm
Location
Computer Science 402
Type
Distinguished Colloquium Series Speaker
Host
Olga Russakovsky

Kinnis Gosha
This talk offers a strategic look at HBCU STEM departments and how to engage them. Drawing on firsthand experience with more than 18 NSF grants awarded in the last eight years, Dr. Kinnis Gosha will facilitate a candid discussion. Topics include (1) relationship building with HBCU faculty, students, and administrators as subject matter experts, research collaborators, and colleagues; (2) intentional budgets, publications, and measurable outcomes aligned with HBCU research goals and strategic plans; and (3) what works, and what rarely works, when collaborating on proposals, implementing projects, reporting data, and evaluating programs. Participants in this session will also leave with an HBCU CS Engagement Checklist to facilitate symbiotic partnerships.

Bio: Dr. Kinnis Gosha (pronounced Go-Shay) is the Hortinius I. Chenault Endowed Professor of Computer Science, Academic Program Director for Software Engineering, and Executive Director of the Morehouse Center for Broadening Participation in Computing. Dr. Gosha’s research interests include conversational AI, social media data analytics, computer science education, broadening participation in computing, and culturally relevant computing. Gosha also leads Morehouse’s Software Engineering degree program, where he builds collaborations with industry partners to provide his students with a variety of experiential learning opportunities. In October 2022, Gosha took over as the Principal Investigator of the Institute for African-American Mentoring in Computing Sciences (IAAMCS), a Broadening Participation in Computing Alliance funded by the National Science Foundation.

To date, 21 undergraduate researchers in his lab have gone on to pursue a doctoral degree in computing. Gosha currently has over 60 peer-reviewed publications in the area of Broadening Participation in Computing (BPC). Since arriving at Morehouse in 2011, he has included undergraduate student researchers as co-authors on 26 peer-reviewed manuscripts. Gosha is very active in the BPC community, serving as a regular paper and poster reviewer for the Tapia, SIGCSE, and RESPECT conferences. Currently, he is the Co-Chair of the IEEE Special Technical Community for Broadening Participation and a newly elected member of both the Computing Research Association board and the National Science Foundation Computer and Information Science and Engineering (CISE) Advisory Committee.


This talk will not be live streamed or recorded.

Highly accurate protein structure prediction with deep learning

Date and Time
Monday, September 25, 2023 - 4:30pm to 5:30pm
Location
Friend Center 101
Type
Distinguished Colloquium Series Speaker
Speaker
John Jumper, from DeepMind
Host
Ellen Zhong

John Jumper
Our work on deep learning for biology, specifically the AlphaFold system, has demonstrated that neural networks are capable of highly accurate modeling of both protein structure and protein-protein interactions. In particular, the system shows a remarkable ability to extract chemical and evolutionary principles from experimental structural data. This computational tool has repeatedly shown the ability not only to predict accurate structures for novel sequences and novel folds but also to perform unexpected tasks such as selecting stable protein designs or detecting protein disorder. In this lecture, I will discuss the machine learning principles behind this breakthrough, the diverse data and rigorous evaluation environment that enabled it, and the many innovative ways in which the community is using these tools to do new types of science. I will also reflect on some surprising limitations -- insensitivity to mutations and the lack of context about the chemical environment of the proteins -- and how these may be traced back to essential features of the training process. Finally, I will conclude with a discussion of some ideas on the future of machine learning in structural biology and how the experimental and computational communities can organize their research and data to enable many more such breakthroughs in the future.
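As a rough illustration of how predicted structures are scored against experiment in such rigorous evaluations -- this is not DeepMind's evaluation code, and the coordinates below are synthetic placeholders -- here is a minimal Python sketch of the Kabsch superposition followed by an RMSD computation between a hypothetical predicted model and an experimental one.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between point sets P and Q (N x 3) after optimal rigid superposition.

    Uses the Kabsch algorithm: center both sets, find the rotation minimizing
    the squared deviation via SVD, then compute the root-mean-square deviation.
    """
    P = P - P.mean(axis=0)                    # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                               # 3x3 cross-covariance of the point sets
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # optimal rotation
    return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))

# Hypothetical C-alpha coordinates for an experimental structure and a noisy "prediction".
rng = np.random.default_rng(0)
experimental = rng.normal(size=(100, 3))
predicted = experimental + rng.normal(scale=0.5, size=(100, 3))
print(f"RMSD after superposition: {kabsch_rmsd(predicted, experimental):.2f}")
```

Community assessments such as CASP use related but more robust scores (for example GDT and lDDT) built on this same idea of comparing predicted coordinates against experimental ones.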

Bio: John Jumper received his PhD in Chemistry from the University of Chicago, where he developed machine learning methods to simulate protein dynamics. Prior to that, he worked at D.E. Shaw Research on molecular dynamics simulations of protein dynamics and supercooled liquids. He also holds an MPhil in Physics from the University of Cambridge and a B.S. in Physics and Mathematics from Vanderbilt University. At DeepMind, John is leading the development of new methods to apply machine learning to protein biology.


To request accommodations for a disability, please contact Emily Lawrence at emilyl@cs.princeton.edu at least one week prior to the event.

This talk will be live streamed on Princeton University Media Central Live.

A Quiet Revolution in Robotics

Date and Time
Tuesday, September 12, 2023 - 4:30pm to 5:30pm
Location
Friend Center 101
Type
Distinguished Colloquium Series Speaker
Speaker
Host
Jia Deng

Vladlen Koltun
Bio: Vladlen Koltun received his PhD in 2002 and has worked across multiple fields of computer science. He has mentored more than 50 PhD students, postdocs, research scientists, and PhD student interns, many of whom are now successful research leaders. Until 2021, he served as the Chief Scientist for Intelligent Systems at Intel, where he built an international research lab spanning four continents that produced high-impact results in robotics, computer vision, image synthesis, machine learning, and other areas.


This event is co-sponsored by the Department of Computer Science and the Center for Statistics and Machine Learning.

To request accommodations for a disability, please contact Emily Lawrence at emilyl@cs.princeton.edu at least one week prior to the event.

The talk recording is viewable here.

The Future of Cloud Infrastructure for Large AI Models

Date and Time
Monday, April 24, 2023 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Speaker
Host
Adam Finkelstein

Corey Sanders
Corey Sanders, CVP, Microsoft Cloud for Industry, will join us to share how Microsoft is building the infrastructure requirements for scaling AI and Large Language Model (LLM) services, with a specific focus on Azure, GPU sourcing and the architecture of OpenAI-specific data centers.

Corey will highlight the impact of advanced AI models such as GitHub Copilot, including how these tools and models work and the quality they achieve, with real-world examples of how advanced AI models are already transforming the software development landscape.

Bio: Corey Sanders is the Corporate Vice President for Microsoft Cloud for Industry, an organization dedicated to serving Microsoft's customers with tailored industry solutions as they transform into successful digital businesses.

Prior to this role, Corey led Microsoft Commercial Solution Areas, owning sales strategy and corporate technical sales across Solution Areas and Teams that include Digital Application Innovation, Azure Infrastructure & IoT, Azure Data & AI, Business Applications, Security, and Modern Workplace. His focus also included selling the full value of Microsoft cross-cloud solutions and advancing the technical depth of the Microsoft Solutions team.   

Earlier, Corey was Head of Product for Azure Compute and the founder of Microsoft Azure’s Infrastructure as a Service (IaaS) business. During that time, he was responsible for products, strategy and technical vision aligned to core Azure compute services. He also led program management for multiple Azure services. Earlier in his career, Corey was a developer in the Windows Serviceability team with ownership across the networking and kernel stack for Windows. 

In his first role at Microsoft in 2003, Corey served as an intern on the Windows team after graduating from Princeton University, where he earned his B.S.E. in Computer Science.

Today, Corey resides with his family in New Jersey. 


To request accommodations for a disability please contact Emily Lawrence, emilyl@cs.princeton.edu, at least one week prior to the event.

The Four Pillars of Machine Learning

Date and Time
Monday, May 1, 2023 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Speaker
Host
Elad Hazan

Kevin Murphy
I will present a unified perspective on the field of machine learning, following the structure of my recent book, "Probabilistic Machine Learning: Advanced Topics," which is centered on the "four pillars of ML": predictions, decisions, discovery, and generation. For each of these tasks, I will give a brief summary of some recent methods, including a few of my own contributions.
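As a hedged, illustrative gloss on the four pillars (not an excerpt from the book), the Python sketch below runs all four tasks against a single one-dimensional Gaussian model: discovery fits the parameters, prediction evaluates the fitted density on new data, decision weighs an expected payoff against a cost, and generation samples synthetic data; the dataset, payoff, and threshold are made up.

```python
import math
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=2.0, scale=1.5, size=500)   # hypothetical observations

# Discovery: infer the parameters of a simple probabilistic model (a Gaussian, fit by MLE).
mu, sigma = data.mean(), data.std()

# Prediction: the density the fitted model assigns to a new observation.
def predictive_density(x):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Decision: act only when the expected payoff under the model exceeds the cost of acting.
gain, cost = 10.0, 4.0                                         # hypothetical payoff and cost
p_exceed = 1.0 - 0.5 * (1.0 + math.erf((3.0 - mu) / (sigma * math.sqrt(2))))
act = gain * p_exceed - cost > 0

# Generation: draw new synthetic data from the fitted model.
samples = rng.normal(loc=mu, scale=sigma, size=5)

print(f"fit: mu={mu:.2f}, sigma={sigma:.2f}")
print(f"p(x=0) = {predictive_density(0.0):.3f}, act = {act}")
print("generated:", np.round(samples, 2))
```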

Bio: Kevin was born in Ireland but grew up in England. He received his BA from the University of Cambridge, his MEng from the University of Pennsylvania, and his PhD from UC Berkeley. He then did a postdoc at MIT and was an associate professor of computer science and statistics at the University of British Columbia in Vancouver, Canada, from 2004 to 2012. After getting tenure, he went to Google in California on his sabbatical and ended up staying. He currently runs a team of six researchers inside Google Brain; the team works on generative models, Bayesian inference, and various other topics. Kevin has published over 125 papers in refereed conferences and journals, as well as three textbooks on machine learning published by MIT Press in 2012, 2022, and 2023. (The 2012 book was awarded the DeGroot Prize for best book in the field of Statistical Science.) Kevin also served as (co-)Editor-in-Chief of JMLR from 2014 to 2017.


This talk will be live streamed here: https://mediacentrallive.princeton.edu/

To request accommodations for a disability please contact Emily Lawrence, emilyl@cs.princeton.edu, at least one week prior to the event.

Making the Invisible Visible: Observing Complex Software Dynamics

Date and Time
Tuesday, November 15, 2022 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Brian Kernighan

Richard Sites
From mobile and cloud apps to video games to driverless vehicle control, more and more software is time-constrained: it must deliver reliable results seamlessly, consistently, and virtually instantaneously. If it doesn't, customers are unhappy--and sometimes lives are put at risk. When complex software underperforms or fails, identifying the root causes is difficult and, historically, few tools have been available to help, leaving application developers to guess what might be happening. How can we do better?

The key is to have low-overhead observation tools that can show exactly where all the elapsed time goes, in both normal responses and delayed responses. Doing so makes visible each of the seven possible reasons for such delays, as the talk will show.
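As a rough, hedged illustration of that idea in miniature (this is not the instrumentation discussed in the talk, and real tracers operate at far lower overhead, typically inside the kernel), the Python sketch below records timestamped events into a trace buffer during a request and then attributes the elapsed time to each phase offline; the phases and sleeps are hypothetical stand-ins for CPU work, I/O waits, and serialization.

```python
import time

# Trace buffer of (timestamp_ns, label) pairs; the hot-path work is a single append
# so that observing the program perturbs it as little as possible.
TRACE = []

def mark(label):
    TRACE.append((time.monotonic_ns(), label))

def handle_request():
    mark("request_start")
    time.sleep(0.002)          # stand-in for CPU work
    mark("cpu_done")
    time.sleep(0.010)          # stand-in for waiting on a disk or network call
    mark("io_done")
    time.sleep(0.001)          # stand-in for serializing the response
    mark("request_end")

handle_request()

# Offline step: turn raw timestamps into a breakdown of where the elapsed time went.
for (t0, a), (t1, b) in zip(TRACE, TRACE[1:]):
    print(f"{a:>14} -> {b:<14} {(t1 - t0) / 1e6:8.3f} ms")
```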

Bio: Richard L. Sites wrote his first computer program in 1959 and has spent most of his career at the boundary between hardware and software, with a particular interest in CPU/software performance interactions. His past work includes VAX microcode, co-architecting the DEC Alpha, and inventing the performance counters found in nearly all processors today. He has done low-overhead microcode and software tracing at DEC, Adobe, Google, and Tesla. Dr. Sites earned his PhD at Stanford in 1974; he holds 66 patents and is a member of the US National Academy of Engineering.


To request accommodations for a disability, please contact Emily Lawrence at emilyl@cs.princeton.edu at least one week prior to the event.

This talk will be recorded and live streamed on Princeton University Media Central.

Shading Languages and the Emergence of Programmable Graphics Systems

Date and Time
Monday, November 14, 2022 - 4:30pm to 5:30pm
Location
Friend Center 101
Type
Distinguished Colloquium Series Speaker
Host
Adam Finkelstein

Pat Hanrahan
A major challenge in using computer graphics for movies and games is to create a rendering system that can create realistic pictures of a virtual world. The system must handle the variety and complexity of the shapes, materials, and lighting that combine to create what we see every day. The images must also be free of artifacts, emulate cameras to create depth of field and motion blur, and compose seamlessly with photographs of live action.

Pixar's RenderMan was created for this purpose and has been widely used in feature film production. A key innovation in the system is the use of a shading language to procedurally describe appearance. Shading languages were subsequently extended to run in real time on graphics processing units (GPUs), and shading languages are now widely used in game engines. The final step was the realization that the GPU is a data-parallel computer, and that the shading language could be extended into a general-purpose data-parallel programming language. This enabled a wide variety of applications in high performance computing, such as physical simulation and machine learning, to be run on GPUs. Nowadays, GPUs are among the fastest computers in the world. This talk will review the history of shading languages and GPUs, and discuss the broader implications for computing.
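To make the link between procedural shading and data-parallel execution concrete, here is a small illustrative sketch (not RenderMan's shading language or GLSL): a per-pixel "shader" for Lambertian diffuse lighting of a sphere is written once and then evaluated over every pixel simultaneously as array operations, which is roughly how a GPU executes the same shader across many fragments in parallel; the resolution and light direction are arbitrary.

```python
import numpy as np

WIDTH, HEIGHT = 256, 256
LIGHT = np.array([0.5, 0.5, 1.0]) / np.linalg.norm([0.5, 0.5, 1.0])

# Per-pixel inputs: a grid of (x, y) coordinates in [-1, 1], much as a GPU hands
# each fragment its own interpolated attributes.
y, x = np.mgrid[-1:1:HEIGHT * 1j, -1:1:WIDTH * 1j]

# The "shader": surface normal of a unit sphere where one exists, then simple
# Lambertian diffuse lighting. Written as if for one pixel, executed for all at once.
r2 = x ** 2 + y ** 2
inside = r2 <= 1.0
z = np.sqrt(np.clip(1.0 - r2, 0.0, None))
normal = np.stack([x, y, z], axis=-1)                    # (H, W, 3) surface normals
diffuse = np.clip(normal @ LIGHT, 0.0, 1.0) * inside     # N·L, clamped, masked to the sphere

image = (diffuse * 255).astype(np.uint8)
print(image.shape, image.max())   # a (256, 256) grayscale rendering of a lit sphere
```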

Bio: Pat Hanrahan is the Canon Professor of Computer Science and Electrical Engineering in the Computer Graphics Laboratory at Stanford University. His research focuses on rendering algorithms, graphics systems, and visualization.

Hanrahan received a Ph.D. in biophysics from the University of Wisconsin-Madison in 1985. As a founding employee at Pixar Animation Studios in the 1980s, Hanrahan led the design of the RenderMan Interface Specification and the RenderMan Shading Language. In 1989, he joined the faculty of Princeton University. In 1995, he moved to Stanford University. More recently, Hanrahan served as a co-founder and CTO of Tableau Software.  He has received three Academy Awards for Science and Technology, the SIGGRAPH Computer Graphics Achievement Award, the SIGGRAPH Stephen A. Coons Award, and the IEEE Visualization Career Award. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences. In 2019, he received the ACM A. M. Turing Award.


To request accommodations for a disability, please contact Emily Lawrence at emilyl@cs.princeton.edu at least one week prior to the event.

This talk will be recorded and live streamed on Princeton University Media Central.

Theoretical Reflections on Quantum Supremacy

Date and Time
Wednesday, May 19, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
Distinguished Colloquium Series Speaker
Host
Sanjeev Arora & Andrew Houck

A recording of this talk is available.


Umesh Vazirani
The recent demonstration of quantum supremacy by Google is a first step towards the era of small- to medium-scale quantum computers. In this talk I will explain what the experiment accomplished and the theoretical work it is based on, as well as what it did not accomplish and the many theoretical and practical challenges that remain. I will also describe recent breakthroughs in the design of protocols for the testing and benchmarking of quantum computers, a task that has deep computational and philosophical implications. Specifically, this leads to protocols for scalable and verifiable quantum supremacy, certifiable quantum randomness generation, and verification of quantum computation.
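For a sense of how such benchmarking protocols work, the sketch below is a hedged toy version of linear cross-entropy benchmarking (XEB), not the experiment's actual pipeline: a random state stands in for a random circuit's output, a simple noise model mixes ideal samples with uniform bitstrings, and the fidelity is estimated from the ideal probabilities of the observed samples; the qubit count, noise level, and shot count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10                      # qubits, so 2**n possible bitstrings
dim = 2 ** n

# Stand-in for the output state of a random circuit: a random complex state whose
# outcome probabilities approximately follow the Porter-Thomas distribution.
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
p_ideal = np.abs(amps) ** 2
p_ideal /= p_ideal.sum()

def sample_device(fidelity, shots):
    """Toy noise model: with probability `fidelity` sample the ideal distribution,
    otherwise output a uniformly random bitstring."""
    ideal = rng.random(shots) < fidelity
    return np.where(ideal,
                    rng.choice(dim, size=shots, p=p_ideal),
                    rng.integers(dim, size=shots))

def linear_xeb(samples):
    """Linear cross-entropy benchmark: F_XEB = 2^n * <p_ideal(sample)> - 1."""
    return dim * p_ideal[samples].mean() - 1.0

samples = sample_device(fidelity=0.3, shots=200_000)
print(f"estimated XEB fidelity: {linear_xeb(samples):.3f}")   # should land near 0.3
```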

Bio:
Umesh Vazirani is the Strauch Distinguished Professor of Computer Science at UC Berkeley and Director of the Berkeley Quantum Information and Computation Center (BQIC). His 1993 paper with Ethan Bernstein laid the foundations of quantum complexity theory, and his research has touched on many facets of quantum computation, including quantum algorithms, quantum Hamiltonian complexity, and interactive classical testing of quantum devices. Vazirani is co-inventor of the bid scaling algorithm for the AdWords auction, which is widely used by Internet search companies, and co-winner of the Fulkerson Prize for the ARV graph partitioning algorithm. He is a member of the NAS and co-author of two books: An Introduction to Computational Learning Theory (MIT Press) and Algorithms (McGraw-Hill).

To request accommodations for a disability please contact Emily Lawrence, emilyl@cs.princeton.edu, at least one week prior to the event.

Monoculture and Simplicity in an Ecosystem of Algorithmic Decision-Making

Date and Time
Thursday, April 29, 2021 - 4:30pm to 5:30pm
Location
Zoom Webinar (off campus)
Type
Distinguished Colloquium Series Speaker
Host
Sanjeev Arora

A recording of this talk is available.


Jon Kleinberg
Algorithms are increasingly used to aid decision-making in high-stakes settings including employment, lending, healthcare, and the legal system. This development has led to an ecosystem of growing complexity in which algorithms and people interact around consequential decisions, often mediated by organizations and firms that may be in competition with one another.

We consider two related sets of issues that arise in this setting. First, concerns have been raised about the effects of algorithmic monoculture, in which multiple decision-makers all rely on the same algorithm. In a set of models drawing on minimal assumptions, we show that when competing decision-makers converge on the use of the same algorithm as part of a decision pipeline, the result can potentially be harmful for social welfare even when the algorithm is more accurate than any decision-maker acting on their own. Second, we consider some of the canonical ways in which data is simplified over the course of these decision-making pipelines, showing how this process of simplification can introduce sources of bias in ways that connect to principles from the psychology of stereotype formation.

The talk will be based on joint work with Sendhil Mullainathan and Manish Raghavan.
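The following toy simulation is a hedged illustration of the monoculture setting described above, not the model from the paper: candidates have true qualities, two firms rank them using either one shared noisy score or two independently noisy scores, hire alternately from the top of their rankings, and the average true quality of the hires is compared; the noise level, population size, and number of hires are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def welfare(shared_algorithm, n_candidates=50, n_hires=10, noise=1.0, trials=5_000):
    """Average true quality of candidates hired by two firms picking alternately."""
    total = 0.0
    for _ in range(trials):
        quality = rng.normal(size=n_candidates)
        if shared_algorithm:
            s = quality + rng.normal(scale=noise, size=n_candidates)
            rank_a = rank_b = np.argsort(-s)               # monoculture: one shared ranking
        else:
            rank_a = np.argsort(-(quality + rng.normal(scale=noise, size=n_candidates)))
            rank_b = np.argsort(-(quality + rng.normal(scale=noise, size=n_candidates)))
        hired, ia, ib = set(), 0, 0
        for turn in range(n_hires):
            rank, i = (rank_a, ia) if turn % 2 == 0 else (rank_b, ib)
            while rank[i] in hired:                        # skip candidates already hired
                i += 1
            hired.add(rank[i])
            if turn % 2 == 0:
                ia = i + 1
            else:
                ib = i + 1
        total += quality[list(hired)].mean()
    return total / trials

print("shared algorithm  :", round(welfare(True), 3))
print("independent scores:", round(welfare(False), 3))
```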

Bio:
Jon Kleinberg is the Tisch University Professor in the Departments of Computer Science and Information Science at Cornell University. His research focuses on the interaction of algorithms and networks, the roles they play in large-scale social and information systems, and their broader societal implications. He is a member of the National Academy of Sciences and the National Academy of Engineering, and the recipient of MacArthur, Packard, Simons, Sloan, and Vannevar Bush research fellowships, as well as awards including the Harvey Prize, the Nevanlinna Prize, and the ACM Prize in Computing.


To request accommodations for a disability please contact Emily Lawrence, emilyl@cs.princeton.edu, at least one week prior to the event.

Language, Brain, and Computation

Date and Time
Friday, February 19, 2021 - 12:30pm to 1:30pm
Location
Zoom Webinar (off campus)
Type
Distinguished Colloquium Series Speaker
Host
Sanjeev Arora

A recording of this talk is available.


Christos Papadimitriou
How does the brain beget the mind?  How do molecules, cells and synapses effect reasoning, intelligence, language?   Despite dazzling progress in experimental neuroscience, as well as in cognitive science at the other extreme of scale, we do not seem to be making progress in the overarching question -- the gap is huge and a completely new approach seems to be required.  As Richard Axel recently put it:  "We don't have a logic for the transformation of neural activity into thought [...]."  

What kind of formal system would qualify as this "logic"? 

I will introduce the Assembly Calculus, a computational system whose basic data structure is the assembly -- assemblies are large populations of neurons representing concepts, words, ideas, episodes, and so on.

The Assembly Calculus is biologically plausible in two orthogonal senses: its primitives are properties of assemblies that have been observed in experiments or are useful for explaining experiments, and they can provably (through both mathematical proof and simulation) be "compiled down" to the activity of neurons and synapses. I will also explain why the Assembly Calculus is especially powerful for exploring how language happens in the brain.
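To convey the flavor of these primitives, here is a small, hedged toy simulation in the spirit of the Assembly Calculus projection operation (the parameters and update rule are simplified and not taken from the talk): a stimulus repeatedly drives a randomly connected area, the k most strongly driven neurons fire each round, synapses from active neurons into the winners are strengthened Hebbian-style, and the winning set stabilizes into an assembly.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, p, beta, rounds = 1000, 50, 0.05, 0.10, 12   # neurons, cap size, connectivity, plasticity, rounds

# Simplified synapses: one stimulus drive per neuron, plus recurrent weights
# rec_w[post, pre] drawn as a sparse random graph within the area.
stim_w = (rng.random(n) < p).astype(float)
rec_w = (rng.random((n, n)) < p).astype(float)
np.fill_diagonal(rec_w, 0.0)

prev_winners = np.array([], dtype=int)
for t in range(rounds):
    # Each neuron's total input: stimulus drive plus recurrent drive from last round's winners.
    drive = stim_w + rec_w[:, prev_winners].sum(axis=1)
    winners = np.argsort(-drive)[:k]                 # cap-k winner-take-all (inhibition)
    # Hebbian plasticity: strengthen synapses from active sources into the new winners.
    stim_w[winners] *= 1.0 + beta
    rec_w[np.ix_(winners, prev_winners)] *= 1.0 + beta
    overlap = len(np.intersect1d(winners, prev_winners)) / k
    print(f"round {t:2d}: overlap with previous winners = {overlap:.2f}")
    prev_winners = winners
```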

Bio: Christos Harilaos Papadimitriou is the Donovan Family Professor of Computer Science at Columbia University.  Before joining Columbia in 2017, he was a professor at UC Berkeley for 22 years, and before that he taught at Harvard, MIT, NTU Athens, Stanford, and UCSD.  He has written five textbooks and many articles on algorithms and complexity, and their applications to optimization, databases, control, AI, robotics, economics and game theory, the Internet, evolution, and the brain.  He holds a PhD from Princeton (1976) and eight honorary doctorates, including from ETH, the University of Athens, EPFL, and Université Paris Dauphine.  He is a member of the National Academy of Sciences of the US, the American Academy of Arts and Sciences, and the National Academy of Engineering, and he has received the Knuth Prize, the Gödel Prize, the von Neumann Medal, and the 2018 Harvey Prize from the Technion.  In 2015 the president of the Hellenic Republic named him Commander of the Order of the Phoenix.  He has also written three novels: “Turing,” “Logicomix,” and his latest, “Independence.”


This talk will be recorded.  To request accommodations for a disability please contact Emily Lawrence, emilyl@cs.princeton.edu, at least one week prior to the event.
