Current PICASso Seminars - Spring 2010
"The Evolution of GPUs and GPU Computing"
March 10, 2010, 12:15 PM - 1:15 PM
120 Lewis Science Library

Speaker: Dr. David Luebke, Senior Manager, NVIDIA Research

Overview: Modern GPUs have emerged as the world's most successful parallel architecture. GPUs provide a level of massively parallel computation that was once the preserve of supercomputers like the MasPar and the Connection Machine. Today's GPUs do not just render video game frames; they also accelerate physics computations, video transcoding, image processing, astrophysics, protein folding, seismic exploration, computational finance, and radio astronomy. Enabled by platforms like the CUDA architecture, which provides a scalable programming model, researchers across science and engineering are accelerating applications in their disciplines by up to two orders of magnitude. These success stories, and the tremendous scientific and market opportunities they open up, bring a new and diverse set of workloads that in turn carry implications for the evolution of future GPU architectures.
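
A rough sense of the programming model can be given in a few lines of code. The sketch below is illustrative only, not material from the talk: it launches a CUDA kernel in which each of roughly a million array elements is handled by its own lightweight thread, the pattern that lets the same code scale across GPUs with different core counts. It assumes the third-party PyCUDA bindings and an NVIDIA GPU.

    import numpy as np
    import pycuda.autoinit                     # create a CUDA context on the default GPU
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    # One thread per array element: the hallmark of the CUDA programming model.
    mod = SourceModule("""
    __global__ void saxpy(float *y, const float *x, float a, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];     // y = a*x + y
    }
    """)
    saxpy = mod.get_function("saxpy")

    n = 1 << 20
    x = np.random.randn(n).astype(np.float32)
    y = np.random.randn(n).astype(np.float32)
    expected = 2.0 * x + y

    # Launch ~4096 blocks of 256 threads; PyCUDA copies the arrays in and out.
    saxpy(drv.InOut(y), drv.In(x), np.float32(2.0), np.int32(n),
          block=(256, 1, 1), grid=((n + 255) // 256, 1))
    assert np.allclose(y, expected)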

David Luebke helped found NVIDIA Research in 2006 after eight years on the faculty of the University of Virginia. Luebke received his Ph.D. under Fred Brooks at the University of North Carolina in 1998. His principal research interests are GPU computing and real-time computer graphics. Luebke's honors include the NVIDIA Distinguished Inventor award, the NSF CAREER and DOE Early Career PI awards, and the ACM Symposium on Interactive 3D Graphics "Test of Time Award". Dr. Luebke has co-authored a book, a SIGGRAPH Electronic Theater piece, a major museum exhibit visited by over 110,000 people, and dozens of papers, articles, chapters, and patents.


"The Transformation of Modern Science"
March 22, 2010, 12:30 PM - 1:30 PM
138 Lewis Science Library

Speaker: Dr. Edward Seidel, Acting Assistant Director, Directorate for Mathematical and Physical Sciences, NSF

Overview: Modern science is undergoing a profound transformation as it aims to tackle the complex problems of the 21st century. It is becoming highly collaborative; problems as diverse as climate change, renewable energy, and the origin of gamma-ray bursts require understanding processes that no single group or community has the skills to address. At the same time, after centuries of little change, compute, data, and network environments have grown by 12 orders of magnitude in the last few decades. Cyberinfrastructure (the comprehensive set of deployable hardware, software, and algorithmic tools and environments supporting research, education, and increasingly collaboration across disciplines) is transforming all research disciplines and society itself. Drawing on examples ranging from astrophysics to emergency forecasting, I will describe new trends in science and the need for, the potential of, and the transformative impact of cyberinfrastructure. I will also discuss current and planned efforts at the National Science Foundation to address these trends.

Edward Seidel is a physicist recognized for his work on numerical relativity and black holes, as well as in high-performance and grid computing. He earned his Ph.D. from Yale University in relativistic astrophysics. He was a professor at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute, or AEI) in Germany from 1996 to 2003. There, Seidel founded and led AEI's numerical relativity and e-science groups, which became leaders in solving Einstein's equations using large-scale computers, and in distributed and grid computing. He was also a senior research scientist at the National Center for Supercomputing Applications and an associate professor in the Physics Department at the University of Illinois at Urbana-Champaign.

In June 2008, the National Science Foundation selected Seidel as its director for the Office of Cyberinfrastructure (OCI). He began the position on Sept. 1, 2008, and in it he oversees advances in supercomputing, high-speed networking, data storage, and software development on a national level. He has recently assumed the role of Acting Assistant Director for Mathematical and Physical Sciences at NSF.


"Gordon: A New Kind of Supercomputer for Data-Intensive Applications"
March 26, 2010, 3:30 PM - 4:30 PM
121 Lewis Science Library

Speaker: Dr. Mike Norman, Interim Director, San Diego Supercomputer Center

Overview: Today's most powerful supercomputers have impressive floating-point capabilities, but they are rather unbalanced from the standpoint of memory and interconnect bandwidth, not to mention disk IO bandwidth. One measure of this balance is the Amdahl number, defined as the ratio of the IO bandwidth in bytes/s to the CPU performance in FLOPS. A balanced system has an Amdahl number of 1. The fastest machines on the Top500 list have Amdahl numbers in the range of 0.05 - 0.1, which makes them ideal for compute-intensive applications. In 2011 the San Diego Supercomputer Center (SDSC) will deploy Gordon, a supercomputer architected for data-intensive applications, such as data mining and databases, which are growing in importance in science, engineering, medicine, and the social sciences. This presentation will describe the architectural features of Gordon, along with preliminary results from several applications running on Dash, a prototype system at SDSC.
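
For concreteness, the Amdahl number is simple arithmetic. The sketch below uses hypothetical round numbers (not figures from the talk) to show how a FLOPS-rich but IO-poor design scores against a balanced one:

    # Amdahl number = IO bandwidth (bytes/s) / arithmetic rate (FLOPS).
    # Both machine configurations below are hypothetical, for illustration only.
    def amdahl_number(io_bytes_per_s, flops):
        return io_bytes_per_s / flops

    compute_heavy = amdahl_number(io_bytes_per_s=50e9, flops=1e12)   # 0.05: strong FLOPS, starved IO
    balanced      = amdahl_number(io_bytes_per_s=1e12, flops=1e12)   # 1.0: balanced in Amdahl's sense
    print(compute_heavy, balanced)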

Michael L. Norman is Interim Director of the San Diego Supercomputer Center and Distinguished Professor of Physics at UCSD, where he also directs the Laboratory for Computational Astrophysics. He received his B.S. from Caltech in 1975 and his Ph.D. from UC Davis in 1980. After holding appointments at the Lawrence Livermore and Los Alamos National Laboratories, the Max Planck Institute for Astrophysics, and the National Center for Supercomputing Applications, he joined the faculty at UCSD in 2000. His research focus is the computer simulation of astronomical phenomena using supercomputers, and the development of the numerical methods to carry such simulations out. He is the author of over 200 papers on diverse topics including star formation, cosmic jets, and cosmological evolution. His computer visualizations have appeared in numerous educational TV shows and films, including PBS Nova and The Discovery Channel. He is the recipient of the Alexander von Humboldt Research Prize and the IEEE Sidney Fernbach Award. He was elected a Fellow of the American Physical Society in 2001 and of the American Academy of Arts and Sciences in 2005.


"Petascale Direct Numerical Simulation of Turbulent Combustion: Challenges and Opportunities"
March 29, 2010, 12:30 PM - 1:30 PM
138 Lewis Science Library

Speaker: Dr. Jacqueline H. Chen, Combustion Research Facility, Sandia National Laboratories

Overview: The rapid growth in computing power has presented both opportunities and challenges for high-fidelity simulations of turbulent reacting flows. The advent of petascale supercomputers has made it possible to glean fundamental physical insight into fine-grained 'turbulence-chemistry' interactions in canonical laboratory-scale turbulent flames with direct numerical simulations (DNS). These unique benchmark DNS data are also used to develop and validate the predictive models used to design future fuel-efficient combustors utilizing alternative fuels for transportation and power generation. Such simulations are costly: they require several million CPU-hours on a petascale computer, use over a billion grid points, and generate hundreds of terabytes of data. The turbulent combustion simulation enterprise involves collaborations with computer scientists on performance monitoring and optimization of the software on petascale architectures, on automating workflows for providing runtime diagnostics, and on interactive data mining and visualization of time-varying, multi-scale, multivariate data. Aspects of these collaborations will be described, along with combustion examples that illustrate the scientific role of DNS. Outstanding challenges in extracting salient information from hundreds of terabytes of data, and strategies for mapping DNS solvers to heterogeneous petascale architectures with accelerators, will also be discussed.
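
Back-of-the-envelope arithmetic shows why such runs reach hundreds of terabytes. In the sketch below, only the grid size follows the abstract's "over a billion grid points"; the variable count and snapshot count are assumptions for illustration:

    # Illustrative estimate of DNS output volume; only the grid size comes
    # from the abstract, the other numbers are assumed.
    grid_points   = 1.3e9   # "over a billion grid points"
    variables     = 20      # velocity, temperature, chemical species, ... (assumed)
    bytes_per_val = 8       # double precision
    snapshots     = 1000    # saved time slices (assumed)

    snapshot_tb = grid_points * variables * bytes_per_val / 1e12
    print(f"{snapshot_tb:.2f} TB per snapshot, {snapshot_tb * snapshots:.0f} TB total")
    # -> about 0.21 TB per snapshot and roughly 200 TB for the run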

Jacqueline H. Chen received her bachelor's degree from Ohio State University (1981), her master's degree in Mechanical Engineering from the University of California at Berkeley (1982), and her Ph.D. in Mechanical Engineering from Stanford University (1989). She joined Sandia in 1981 as a member of technical staff in the applied mechanics department and, following a leave beginning in 1986 to complete her Ph.D. at Stanford on a Sandia Doctoral Study Program (DSP) fellowship, returned to join the Combustion Research Facility in 1990. She received the Sandia Employee Recognition Award for Technical Excellence in 1998, was appointed adjunct professor of Chemical Engineering at the University of Utah in 2001, and was promoted to Distinguished Member of Technical Staff at Sandia in 2002. She received DOE INCITE Awards in 2005, 2007, and 2008-2010, the DOE Office of Science Leadership Computing Facility Award in 2006, and the Asian American Engineer of the Year Award in 2009. She has served on numerous national committees and is a member of the DOE Advanced Scientific Computing Research Advisory Committee (ASCAC).


"Ultra-scale visualization with open-source"
April 19, 2010, 12:30 PM - 1:30 PM
121 Lewis Science Library

Speaker: Dr. Berk Geveci, Team Leader, Scientific Visualization and Informatics, Kitware Inc.

Overview: Several factors are driving the growth of scientific simulations. The computational power of computer clusters is growing while the price of individual computers is decreasing, and distributed computing techniques allow thousands of computer nodes to participate in a single simulation. The benefit of this computational power is that simulations are becoming more accurate and more useful for predicting complex phenomena. The downside is that enormous amounts of data need to be saved and analyzed to determine the results of the simulation. The ability to generate data has outpaced our ability to save and analyze it, and this bottleneck is throttling our ability to benefit from improved computing resources. Dr. Geveci will discuss a few of Kitware's projects that aim to close the gap between simulation and analysis. The main focus will be on in-situ processing (also known as co-processing), which couples the visualization/analysis code with the simulation code. Kitware has been developing tools to enable this type of processing by extending the ParaView visualization framework. Dr. Geveci will also briefly talk about collaborative visualization using desktop and web applications, as well as the analysis of data-set ensembles.
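
The essential idea of in-situ processing is that analysis runs inside the simulation's time loop, on live in-memory data, so that only small derived results ever reach disk. The skeleton below illustrates that coupling in generic terms; every name in it is a hypothetical stand-in, not ParaView's actual co-processing API:

    # Generic in-situ (co-processing) skeleton. All function names are
    # hypothetical stand-ins, not ParaView's actual co-processing API.

    def advance(state):
        """Stand-in for one time step of the real solver."""
        state["time"] += state["dt"]
        return state

    def coprocess(state, step):
        """Analyze the live in-memory data (e.g. extract an isosurface,
        compute statistics) and write only the small derived results."""
        if step % 10 == 0:
            print(f"step {step}: t = {state['time']:.3f}, analysis ran in situ")

    state = {"time": 0.0, "dt": 1e-3}
    for step in range(100):
        state = advance(state)
        coprocess(state, step)   # no full-resolution raw dump ever hits disk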

Dr. Geveci leads the scientific visualization and informatics teams at Kitware Inc. He is one of the leading developers of the ParaView visualization application and the Visualization Toolkit (VTK). His research interests include large-scale parallel computing, computational dynamics, finite elements, and visualization algorithms. Dr. Geveci received his B.S. in Mechanical Engineering from Bogazici University in 1994, and his M.S. and Ph.D. in Mechanical Engineering from Lehigh University in 1996 and 1999, respectively. While at Lehigh, he conducted research on subsonic and supersonic flow-induced nonlinear vibrations, developing a new procedure for the solution of coupled flow and structural equations. In addition, he authored software for the study of separation in unsteady boundary-layer flows and the visualization of the numerical and experimental results. After graduating from Lehigh, Dr. Geveci completed a post-doctoral fellowship at the University of Pennsylvania, during which he worked in the area of optimal control, investigating applications in the control of hydrothermal instabilities.


"Colliding galaxies, rotating neutron stars and merging black holes - visualizing high dimensional data"
April 26, 2010, 12:30 PM – 1:30 PM
138 Lewis Science Library

Speaker: Dr. Werner Benger, Visualization Research Scientist, Center for Computation & Technology, Louisiana State University

Overview: Studying the cosmological evolution of galaxy clusters, the deviation of light rays around black holes, and the gravitational waves produced by black hole mergers requires dealing with diverse discretization types such as particle sets, curvilinear grids, and adaptive mesh refinement, as well as tensor data beyond scalar and vector fields. Scientific visualization is an essential tool for analyzing and presenting data from computation or observation. A huge variety of visualization tools now exists, but applying them to a particular problem still runs into unexpected hurdles and complications, frequently starting with the allegedly simple problem of using the right file format. Once data are provided for visualization, one often faces limitations due to new requirements that had not been considered originally, and presumably straightforward operations turn out not to be possible. A systematic approach that treats data sets primarily according to their mathematical properties, rather than application-specific ones, reveals unexpected potential, providing an "exploration framework" instead of just a set of tools with pre-defined capabilities.

This talk presents the "visualization shell" Vish and its approach of modeling data sets using the mathematical background of fiber bundles, topology, and geometric algebra. Generic data sets for scientific visualization are formulated via an acyclic graph of six levels, each of them representing a semantic property of the data. Only two of these levels, "Grid" and "Field", are exposed to the end user, providing an intuitive way to construct complex visualizations from simple building blocks. This approach will be exemplified via visualization methods that were originally developed for astrophysical data but carry over easily to medical visualization and computational fluid dynamics as well.
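
As a loose illustration of the two user-facing levels (an analogy only, not Vish's actual classes or file layout), one can picture a Grid as a container for geometry and a Field as named data attached to it, so that a particle set and a mesh expose the same interface to visualization modules:

    import numpy as np

    # Sketch of the two exposed levels of a fiber-bundle-style data model:
    # a Grid carries the geometry, Fields carry data living on that geometry.
    # This is an analogy only, not Vish's actual data model or API.
    class Grid:
        def __init__(self, name, coordinates):
            self.name = name
            self.coordinates = coordinates   # e.g. particle positions, mesh vertices
            self.fields = {}                 # the "Field" level hangs off the Grid

        def add_field(self, name, values):
            assert len(values) == len(self.coordinates)
            self.fields[name] = values

    # A particle set and a curvilinear block share one interface, so a single
    # visualization module can operate on both.
    particles = Grid("galaxy_particles", np.random.rand(10_000, 3))
    particles.add_field("mass", np.ones(10_000))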

Dr. Benger is a visualization research scientist at the Center for Computation & Technology at Louisiana State University. Before joining CCT, he worked at the Zuse Institute Berlin to develop the Amira (now Avizo) visualization software in collaboration with the Max Planck Institute for Gravitational Physics (Albert Einstein Institute) in Potsdam, Germany. His research interests include the visualization of astrophysical phenomena, with a focus on tensor fields. Benger holds a master's degree in astronomy from the University of Innsbruck, Austria, and a Ph.D. in mathematics and computer science from the Free University of Berlin.


Previous PICASso Seminar Series

Interdisciplinary Computational Seminars (Mondays)

Fall 2008

Fall 2007 - Spring 2008

Fall 2006 - Spring 2007

Fall 2005 - Spring 2006

Fall 2004 - Spring 2005

Fall 2003 - Spring 2004


Computation and Data Analysis in Biology and Information Sciences (Thursdays)

Fall 2008

Fall 2007 - Spring 2008

Fall 2006 - Spring 2007

Fall 2005 - Spring 2006

Fall 2004 - Spring 2005

Summer 2004

Spring 2004


Want to be kept informed of computationally oriented events at Princeton?
SUBSCRIBE to the PICASso mailing list by visiting https://lists.cs.princeton.edu/mailman/listinfo/picasso.
This page also contains information on how to UNSUBSCRIBE.

Other seminars at Princeton

Chemical Engineering

Chemistry

Computer Science

Ecology and Evolutionary Biology

Electrical Engineering

Economics

Institute for Advanced Study: Math

Geoscience

Mathematics

Mechanical and Aerospace Engineering

Molecular Biology

Neuroscience Lunch Seminar Series

Operations Research and Financial Engineering

PACM

Physics

Plasma Physics Lab

Princeton Environmental Institute

Princeton Materials Institute


Want to add your seminar to this page?