Five Computer Science graduate students honored with 2021 Siebel Scholar awards
The Siebel Scholars Foundation announced the recipients of the 2021 Siebel Scholars award. Now in its 20th year, the Siebel Scholars program annually recognizes nearly 100 exceptional students from the world’s leading graduate schools of business, computer science, energy science and bioengineering. The awardees include five Princeton University graduate students in computer science, Sotiris Apostolakis, Kyle Genova, Wei Hu, John Li, and Divyarthi Mohan.
The 92 distinguished students of the Class of 2021 join past Siebel Scholars classes to form an unmatched professional and personal network of more than 1,500 scholars, researchers, and entrepreneurs. Through the program, this formidable group brings together diverse perspectives from business, science, and engineering to influence the technologies, policies, and economic and social decisions that shape the future.
“Every year, the Siebel Scholars continue to impress me with their commitment to academics and influencing future society. This year’s class is exceptional, and once again represents the best and brightest minds from around the globe who are advancing innovations in healthcare, artificial intelligence, the environment and more,” said Thomas M. Siebel, Chairman of the Siebel Scholars Foundation. “It is my distinct pleasure to welcome these students into this ever-growing, lifelong community, and I personally look forward to seeing their impact and contributions unfold.”
Founded in 2000 by the Thomas and Stacey Siebel Foundation, the Siebel Scholars program awards grants to 16 universities in the United States, China, France, Italy and Japan. Following a competitive review process by the deans of their respective schools on the basis of outstanding academic achievement and demonstrated leadership, the top graduate students from 27 partner programs are selected each year as Siebel Scholars and receive a $35,000 award for their final year of studies. On average, Siebel Scholars rank in the top five percent of their class, many within the top one percent.
Sotiris Apostolakis is a Ph.D. candidate in the Computer Science Department at Princeton University, advised by Professor David August. His research focuses on compilers, program analysis, and automatic parallelization. His goal is to make harnessing the full power of parallel architectures accessible to all programmers, regardless of their programming expertise. To that end, he has developed automatic parallelization and analysis frameworks that address longstanding problems in the field. He has completed internships at Facebook and Intel, working on binary code analysis. Before joining Princeton, he earned his Diploma in Electrical and Computer Engineering at the National Technical University of Athens, Greece.
Kyle Genova is a Ph.D. student in the Computer Science Department at Princeton University. His research in computer vision focuses on how to represent and render three-dimensional shapes in order to make new neural network-based algorithms possible. This work has been recognized by CVPR with oral and spotlight presentations, as well as a best paper nomination, and one paper has over 300,000 YouTube views. As an intern at Google, his research led to two Google Open Source projects, a differentiable rendering algorithm that was later implemented by TensorFlow Graphics, and three Google patent filings. He has been awarded a Graduate Research Fellowship by the National Science Foundation, a Gordon Y.S. Wu Fellowship in Engineering by Princeton University, and a Graduate Student Teaching Award by the Princeton University CS Department based on student reviews. Prior to beginning his Ph.D., Kyle received a B.A. in Computer Science from Cornell University.
Wei Hu is a Ph.D. student in the Department of Computer Science at Princeton University, advised by Sanjeev Arora. Previously, he obtained his B.E. in Computer Science from Tsinghua University, where he was a member of Yao Class. He has also been a research intern at Google and Microsoft research labs. His current research interest is in the theoretical foundations of modern machine learning. In particular, his main focus is on obtaining a theoretical understanding of the optimization and generalization mysteries in deep learning, as well as using theoretical insights to design practical and principled machine learning algorithms. Besides deep learning theory, his research papers also contribute to the areas of representation learning, online learning, optimization, and computing in data streams.
John Li is a second-year master's student in computer science with interests in programming languages and verification. He is currently working on CertiCoq, a project that aims to build a proved-correct compiler for a dependently typed functional language. John is developing a framework for automatically generating large parts of compiler optimization passes and their correctness proofs from high-level specifications. John received a B.A. in neuroscience from Princeton in 2019; as an undergraduate, he interned as a researcher in neuroscience labs at Princeton and proved one of CertiCoq's optimization passes correct. Before starting his master's degree, John spent a summer at HRL Laboratories as a research intern on the assured autonomy team.
Divyarthi Mohan is a Ph.D. candidate in the Department of Computer Science at Princeton University, advised by Prof. Matt Weinberg. Previously, she obtained her M.Sc. in Theoretical Computer Science from The Institute of Mathematical Sciences (Chennai), where she worked with Prof. Sayan Bhattacharya. She did her undergraduate studies in Mathematics at the Indian Statistical Institute, Bangalore. During her Ph.D., Divyarthi did research internships at Microsoft and Google. Divyarthi's research interests broadly include algorithms and algorithmic game theory. She is primarily interested in understanding simplicity in algorithmic economics and when simple mechanisms give "good" outcomes. Her research focuses on multi-item mechanism design, where she developed the first sub-exponential approximation scheme in this line of work. She is also interested in social learning and in understanding the effects of various simple dynamics.