FPO

Daniel Suo FPO "Scaling Machine Learning in Practice"

Date and Time
Wednesday, May 10, 2023 - 3:00pm to 5:00pm
Location
Not yet determined.
Type
FPO

Details to follow.

Christopher Hodsdon FPO

Date and Time
Tuesday, May 9, 2023 - 11:00am to 1:00pm
Location
Not yet determined.
Type
FPO

Details to follow.

Theano Stavrinos FPO

Date and Time
Tuesday, May 9, 2023 - 1:00pm to 3:00pm
Location
Not yet determined.
Type
FPO

Details to follow.

Aninda Manocha FPO

Date and Time
Monday, May 8, 2023 - 3:00pm to 4:30pm
Location
Not yet determined.
Type
FPO

Details to follow.

Sinong Geng FPO

Date and Time
Thursday, April 13, 2023 - 2:30pm to 4:30pm
Location
Computer Science 402
Type
FPO

Sinong Geng will present his FPO "Model-Regularized Machine Learning for Decision-Making" on Thursday, April 13, 2023 at 2:30 PM in COS 402 and Zoom.

Location: Zoom link: https://princeton.zoom.us/j/95544518239

The members of Sinong’s committee are as follows:
Examiners: Ronnie Sircar (Adviser), Ryan Adams, Karthik Narasimhan
Readers: Sanjeev Kulkarni, Tom Griffiths

A copy of his thesis is available upon request.  Please email gradinfo@cs.princeton.edu if you would like a copy of the thesis. 
 
Everyone is invited to attend his talk. 
 
Abstract follows below:
Thanks to the growing availability of high-dimensional data, recent developments in machine learning (ML) have redefined decision-making in numerous domains. However, the unreliability of ML in decision-making caused by a lack of high-quality data remains an important obstacle in almost every application. Questions arise such as: (i) Why does an ML method fail to replicate decision-making behaviors in a new environment? (ii) Why does ML give unreasonable interpretations of existing expert decisions? (iii) How can decisions be made in a noisy, high-dimensional environment? Many of these issues can be attributed to the lack of an effective and sample-efficient model underlying ML methods.

This thesis presents our research on developing model-regularized ML for decision-making to address the above issues, in the areas of inverse reinforcement learning and reinforcement learning, with applications to customer/company behavior analysis and portfolio optimization. Specifically, by applying regularizations derived from suitable models, we propose methods for two different goals: (i) to better understand and replicate the existing decision-making of human experts and businesses; and (ii) to conduct better sequential decision-making, while reducing the need for large amounts of high-quality data in situations where such data are scarce.

Xingyuan Sun FPO "Gradient-Based Shape Optimization for Engineering Using Machine Learning"

Date and Time
Friday, February 17, 2023 - 1:00pm to 3:00pm
Location
Computer Science 402
Type
FPO

Xingyuan Sun will present his FPO "Gradient-Based Shape Optimization for Engineering Using Machine Learning" on Friday, February 17, 2023 at 1:00 PM in CS 402 and via Zoom.

Location: CS 402, Zoom Link: https://princeton.zoom.us/j/98360015495.

The members of Xingyuan’s committee are as follows:
Examiners: Ryan Adams (Co-Adviser), Szymon Rusinkiewicz (Co-Adviser), Olga Russakovsky
Readers: Sigrid Adriaenssens, Felix Heide

A copy of his thesis is available upon request.  Please email gradinfo@cs.princeton.edu if you would like a copy of the thesis.
 
Everyone is invited to attend his talk.
 
Abstract follows below:
Shape design problems are important in engineering, e.g., trajectory planning for robot arms, material distribution optimization, etc. However, existing work usually solves these tasks without the help of gradients, which can limit efficiency. We formalize design problems as constrained optimization tasks and propose to use gradient-based optimizers with automatic differentiation to solve them. Specifically, we use the adjoint method when the underlying physical process can be characterized by PDEs. In Chapter 2, we solve for extruder paths in 3D printing that compensate for the deformation caused by the fiber printing process. As the printing process is complex and difficult to model, we create a synthetic dataset and fit it with a neural network to obtain a differentiable surrogate of the printing simulator. We further speed up the optimization process by using a neural network to amortize it, sacrificing a little accuracy in exchange for much faster, real-time inference. In Chapter 3, we study the task of fiber path planning: determining where to lay reinforcing fibers in plastic for 3D printing so as to maximize the stiffness of the composite. We build a simulator by solving the linear elastic equations and use the adjoint method for gradient calculation and BFGS for fiber path optimization. In Chapter 4, we investigate the problem of dovetail joint shape optimization for stiffness. To model the contact between the two parts of a joint, we build a simulator that alternately solves one side of the joint while fixing the other. We use the adjoint method for gradient computation and gradient descent for optimization. All methods are tested both in simulation and in real-world experiments, showing that our approach produces high-quality designs and that the amortized approach provides real-time inference with comparable design quality.
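
For readers unfamiliar with the approach, the short sketch below (illustrative only, not code from the thesis) shows the general pattern the abstract describes: optimizing design parameters by differentiating through a surrogate model and running a gradient-based optimizer such as BFGS. The surrogate, target, and objective here are toy stand-ins; the thesis uses neural-network surrogates and the adjoint method with PDE-based simulators.

import numpy as np
from scipy.optimize import minimize

# Hypothetical desired outcome for an 8-parameter "design" (toy stand-in).
target = np.linspace(0.0, 0.9, 8)

def surrogate(design):
    # Toy differentiable surrogate mapping design parameters to a predicted outcome;
    # in the thesis this role is played by a neural network fit to simulation data.
    return np.tanh(design)

def surrogate_grad(design):
    # Elementwise derivative of the toy surrogate: d/dx tanh(x) = 1 - tanh(x)^2.
    return 1.0 - np.tanh(design) ** 2

def objective(design):
    # Squared error between the surrogate's prediction and the desired target.
    return 0.5 * np.sum((surrogate(design) - target) ** 2)

def objective_grad(design):
    # Chain rule: gradient of the objective through the surrogate.
    return (surrogate(design) - target) * surrogate_grad(design)

x0 = np.zeros(8)  # initial design guess
result = minimize(objective, x0, jac=objective_grad, method="BFGS")
print("optimized design:", np.round(result.x, 3))
print("final objective:", float(result.fun))

Swapping the toy surrogate for a learned, differentiable simulator (or an adjoint-based PDE solver) leaves this optimization loop unchanged, which is what makes the gradient-based formulation attractive.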

Charlie Murphy FPO "Relational Verification of Distributed Systems via Weak Simulations"

Date and Time
Friday, January 27, 2023 - 11:00am to 1:00pm
Location
Not yet determined.
Type
FPO

Adviser: Zachary Kincaid
Readers: Aarti Gupta and Lennart Beringer
Examiners: Wyatt Lloyd and Dave Walker

Location: TBD

Paul Krueger FPO

Date and Time
Wednesday, January 18, 2023 - 2:00pm to 4:00pm
Location
Computer Science 402
Type
FPO

Paul Krueger will present his FPO "Metacognition: toward a computational framework for improving our minds" on Wednesday, January 18, 2023 at 2:00 PM in CS 402.

Location: CS 402

The members of Paul’s committee are as follows:
Examiners: Tom Griffiths (Adviser), Jonathan Cohen, Ryan Adams
Readers: Karthik Narasimhan, Nathaniel Daw

A copy of his thesis will be available upon request.  Please email gradinfo@cs.princeton.edu if you would like a copy of the thesis.

Everyone is invited to attend his talk.

Abstract follows below:

In this dissertation I will show how reinforcement learning (RL) can be applied to the inner workings of cognition. The usual application of RL is to understand human behavior or build intelligent machines interacting in the external world. The same RL formalism can be inverted onto cognitive processes themselves, resulting in a normative account of how to explore and select mental computations, referred to as metacognitive RL. This framework can 1) be used to generate observable behavioral predictions, 2) provide a resource-rational benchmark for both assessing and improving cognition, and 3) motivate cognitive process models based on interacting RL systems. The formalism of metacognitive RL rests on meta-level Markov Decision Processes (meta-MDPs), which provide a general-purpose computational framework that can also make task-specific predictions.

The first study applies the resource-rational framework to risky choice, resulting in the identification of heuristics and accurate predictions about how people adapt their use of heuristics. The second study uses the same metacognitive RL framework to predict which structures of the environment will enhance metacognitive learning in humans during a planning task. In the third study, rather than manipulating the decision environment, which is often infeasible in the real world, the metacognitive RL framework is used to produce feedback in the form of pseudorewards, resulting in faster metacognitive learning in a related planning task. Next, a new cooperative RL architecture is proposed. This approach also uses pseudorewards to promote learning, but rather than generating the pseudorewards from a computational model, it is proposed that they may be produced internally by a distinct RL system. The successful application of the metacognitive RL framework to understand and improve cognitive function depends critically on developing machine learning methods to solve these problems. In the final chapter, I briefly explore the application of a recently proposed machine learning method for solving meta-MDPs.

Rachit Dubey FPO

Date and Time
Friday, January 13, 2023 - 9:00am to 11:00am
Location
Computer Science 402
Type
FPO

Rachit Dubey will present his FPO "The successes and failures of human drives" on Friday, January 13, 2023 at 10:00 AM in CS 402.

Location: CS 402

The members of Rachit’s committee are as follows:
Examiners: Tom Griffiths (Adviser), Ryan Adams, Jonathan Cohen
Readers: Tania Lombrozo, Karthik Narasimhan

A copy of his thesis is available upon request.  Please email gradinfo@cs.princeton.edu if you would like a copy of the thesis.

Everyone is invited to attend his talk.

Abstract follows below:

Even in the absence of external rewards, we have internal motives that drive us to acquire information, pursue tasks, learn new things, etc. What is it that drives us? Under what conditions do these drives become maladaptive? In this dissertation, I employ computational modeling, behavioral experiments, and agent-based simulations to help develop a more complete picture of our intrinsic drives and motivations. In Chapter 2, I present a rational account of curiosity that unifies previously distinct theories in a single framework and explains a wide range of findings about human curiosity. Based on the insights from this framework, in Chapter 3, I present a behavioral intervention that can pique people’s curiosity for everyday scientific topics. Chapter 4 develops a computational model of Aha! moments and provides an explanation for why Aha! moments feel so rewarding. In Chapter 5, using the computational framework of reinforcement learning and the idea of reward design, I study the human drive to keep wanting more. I show that even though this seemingly maladaptive drive leads to unhappiness and overconsumption, it nevertheless plays an important role in promoting adaptive behavior and might be a deeply rooted bias of the human mind. Finally, in Chapter 6, I present an intervention that targets the wealthy and uses non-material social incentives to reduce their water consumption levels. Taken together, this work makes progress towards understanding the origins, strengths, and shortcomings of human drives, illuminating the psychological forces that shape human behavior and suggesting new ways to guide them.

Andrew Jones FPO "Probabilistic models for structured biomedical data"

Date and Time
Friday, December 16, 2022 - 9:30am to 11:30am
Location
Computer Science 402
Type
FPO

Andrew Jones will present his FPO "Probabilistic models for structured biomedical data" on Friday, December 16, 2022 at 9:30 AM in COS 402 and Zoom.

Location: Zoom link: https://princeton.zoom.us/j/95479201507

The members of Andrew’s committee are as follows:
Examiners: Barbara Engelhardt (Adviser), Ben Raphael, Adji Bousso Dieng
Readers: Jonathan Pillow, Olga Russakovsky

A copy of his thesis will be available, upon request, two weeks before the FPO.  Please email gradinfo@cs.princeton.edu if you would like a copy of the thesis.

Everyone is invited to attend his talk.

Abstract follows below:

Modern biomedical datasets—from molecular measurements of gene expression to pathology images—hold promise for discovering new therapeutics and probing basic questions about the behavior of cells. Thoughtful statistical modeling of these complex, high-dimensional data is crucial to elucidate robust scientific findings. A common assumption in data analysis is that the data samples are independent and identically distributed. However, this assumption is nearly always violated in practice. This is especially true in the setting of biomedical data, which often exhibit structure such as subgroups of patients, cells, or tissue types, or other correlation among the samples.

In this body of work, I propose data analysis and experimental design frameworks to account for several types of highly-structured biomedical data. These approaches, which take the form of Bayesian models and associated inference algorithms, are specifically tailored for datasets with group structure, multiple data modalities, and spatial organization of samples.

In the first line of work, I propose a model for contrastive dimension reduction that decomposes the sources of variation in samples that belong to case and control conditions. Second, I propose a computational framework for aligning spatially-resolved genomics data into a common coordinate system that accounts for spatial correlation among the samples and models multiple data modalities. Finally, I propose a family of methods for optimally designing spatially-resolved genomics experiments that is tailored to the highly-structured data collection process of these studies. Together, this body of work advances the field of biomedical data analysis by developing models that directly exploit common types of structure within these data and demonstrating the advantage of these modeling approaches across an array of data types.
