I am a Ph.D. student in the Department of Computer Science at Princeton University, where I work with Prof. Jia Deng in the Princeton Vision & Learning Lab. I also work closely with Prof. Olga Russakovsky. I am interested in formal reasoning, machine learning, and artificial intelligence in general. My current research focuses on applying machine learning to automated theorem proving, a long-standing research area that has traditionally been dominated by formal methods. Prior to that, I worked on human pose estimation, action detection, and visual relationship understanding.
I received my master’s degree from the University of Michigan and my bachelor’s degree from Tsinghua University.
[杨凯峪] [Email: firstname.lastname@example.org] [CV]
7/2019 I will be heading to ICCV 2019. Feel free to drop me a line if you would like to meet up.
4/2019 Our paper “Learning to Prove Theorems via Interacting with Proof Assistants” is accepted to ICML 2019.
Learning to Prove Theorems via Interacting with Proof Assistants
Kaiyu Yang and Jia Deng
International Conference on Machine Learning (ICML), 2019
[paper] [code] [slides] [poster]
SpatialSense: An Adversarially Crowdsourced Benchmark for Spatial Relation Recognition
Kaiyu Yang, Olga Russakovsky, and Jia Deng
International Conference on Computer Vision (ICCV), 2019
Stacked Hourglass Networks for Human Pose Estimation
Alejandro Newell, Kaiyu Yang, and Jia Deng
European Conference on Computer Vision (ECCV), 2016
Theorem Proving in Proof Assistants: We use machine learning to automatically prove theorems, including not only theorems in mathematics but also theorems describing the behavior of software and hardware systems. Current automated theorem provers usually search for proofs represented at a low level, such as first-order logic and resolution. As a result, they lack the high-level reasoning and problem-specific insights that come naturally to humans.
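To make the "low level" concrete, here is a minimal sketch of propositional resolution, the kind of clause-by-clause search such provers perform. The function names and clause encoding (literals as signed strings) are illustrative, not taken from any particular prover.

```python
def resolve(c1, c2):
    """Return all resolvents of two clauses (literals like 'p' or '-p')."""
    out = []
    for lit in c1:
        neg = lit[1:] if lit.startswith('-') else '-' + lit
        if neg in c2:
            # Cancel the complementary pair and merge the rest.
            out.append((c1 - {lit}) | (c2 - {neg}))
    return out

def refute(clauses):
    """Saturate under resolution; True iff the empty clause is derived,
    i.e. the clause set is unsatisfiable."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                for r in resolve(a, b):
                    if not r:          # empty clause: contradiction found
                        return True
                    new.add(frozenset(r))
        if new <= clauses:             # nothing new: saturated, satisfiable
            return False
        clauses |= new
```

For example, `refute([{'p'}, {'-p', 'q'}, {'-q'}])` derives the empty clause from `p`, `p → q`, and `¬q`. Note that the search manipulates flat clauses with no notion of the mathematical structure a human would exploit.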
In contrast, we use a powerful set of tools called proof assistants (a.k.a. interactive theorem provers): software that assists human experts in proving theorems, and thus provides a high-level framework close to human mathematical reasoning. We develop machine learning agents that interact with proof assistants in place of humans. Our agent learns to imitate human interactions, drawing on the large amount of proof data available online; we use this data to construct a large-scale dataset for training and evaluating the agent. We also develop a baseline model that proves many theorems not provable by existing methods.
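As a toy illustration of this high-level interaction (in Lean syntax, not the proof assistant studied in our paper), each tactic the user (or agent) issues transforms the current proof goal, and the proof assistant checks every step:

```lean
-- Goal: from p ∧ q, conclude q ∧ p.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p := by
  intro h           -- introduce the hypothesis h : p ∧ q
  exact ⟨h.2, h.1⟩  -- swap its two components to close the goal
```

An agent proving theorems this way chooses tactics such as `intro` and `exact`, a far more human-like action space than resolution on first-order clauses.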
Adversarial Crowdsourcing: Benchmarks in vision and language suffer from dataset bias—models can perform exceptionally well by exploiting simple cues without even looking at the image, which undermines the benchmark’s value in measuring visual reasoning abilities. We propose adversarial crowdsourcing to reduce dataset bias. Annotators are explicitly tasked with finding examples that are difficult to predict using simple cues such as 2D spatial configuration or language priors. Specifically, we introduce SpatialSense, a challenging dataset for spatial relation recognition collected via adversarial crowdsourcing.
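The core of the collection loop can be sketched in a few lines. This is a simplified illustration under assumed names (`adversarial_filter`, `baseline`), not the actual SpatialSense pipeline:

```python
def adversarial_filter(proposals, baseline):
    """Keep only annotator-proposed examples that a simple-cue baseline gets wrong.

    `proposals` is an iterable of (example, label) pairs from annotators;
    `baseline` is the cheap classifier (e.g. one using only 2D spatial
    configuration or language priors) that annotators try to fool.
    """
    return [(x, y) for x, y in proposals if baseline(x) != y]
```

Because every retained example defeats the simple cue by construction, a model must actually look at the image to do well on the resulting benchmark.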
Hourglass Networks: We introduce the hourglass network: a novel convolutional network architecture for human pose estimation. It is now a standard component of most state-of-the-art methods for pose estimation.
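The architecture's defining pattern is a recursive pool-process-upsample structure with skip connections at every resolution. Below is a minimal NumPy sketch of that recursion, with the convolutions replaced by identity maps for brevity (so it shows the data flow, not a working network):

```python
import numpy as np

def hourglass(x, depth):
    """One hourglass module: pool down, recurse, upsample, add the skip branch.

    Real hourglass modules apply residual conv blocks on both branches;
    here they are identities to keep the sketch short.
    """
    if depth == 0:
        return x
    skip = x                                   # skip branch at this resolution
    h, w = x.shape
    down = x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))  # 2x2 max pool
    inner = hourglass(down, depth - 1)         # recurse at half resolution
    up = inner.repeat(2, axis=0).repeat(2, axis=1)  # nearest-neighbor upsample
    return skip + up                           # merge coarse and fine features
```

The symmetric top-down/bottom-up processing lets the network combine global context (from the coarse levels) with precise localization (from the skips), which is why it suits pose estimation.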
- During my undergraduate studies, I served as a TA for Data Structures and Algorithms at Tsinghua University, which was offered both to on-campus students and to the general public as a massive open online course (MOOC). I received the Outstanding Teaching Assistant Award twice, in 2015 and 2016. Besides regular TA responsibilities such as grading and holding office hours, I also maintained the MOOC's online infrastructure.