Distinguished Lecture Series

Deep Learning to Solve Challenging Problems

Date and Time
Tuesday, April 24, 2018 - 4:30pm to 5:30pm
Location
Friend Center 101
Type
Distinguished Lecture Series
Speaker
Jeff Dean, from Google
Host
CS Professor Kai Li

For the recording of this talk, follow this link.

For the past six years, the Google Brain team has conducted research on difficult problems in artificial intelligence, on building large-scale computer systems for machine learning research, and, in collaboration with many teams at Google, on applying our research and systems to dozens of Google products.  We have made significant progress in computer vision, speech recognition, language understanding, machine translation, healthcare, robotic control, and other areas. Our group has open-sourced TensorFlow, a widely used system designed to easily express machine learning ideas and to quickly train, evaluate, and deploy machine learning systems.  In this talk, I'll highlight some of the research and computer systems work we've done, with an eye towards how it can be used to solve challenging problems.

This talk describes joint work with many people at Google.
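
As a concrete illustration of the "easily express machine learning ideas" claim, here is a minimal sketch of defining, training, and evaluating a model with TensorFlow's Keras API. It is illustrative only (the dataset, layer sizes, and epoch count are arbitrary choices), not code from the talk:

    # Minimal TensorFlow (Keras) sketch: express a model, train it, evaluate it.
    import tensorflow as tf

    # A small standard dataset (MNIST handwritten digits), scaled to [0, 1].
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # The model is expressed as a stack of layers.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])

    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

    model.fit(x_train, y_train, epochs=2)   # train
    model.evaluate(x_test, y_test)          # evaluate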

Bio:
Jeff Dean joined Google in 1999 and is currently a Google Senior Fellow in Google's Research Group, where he co-founded and leads the Google Brain team, Google's deep learning and artificial intelligence research team.  He and his collaborators are working on systems for speech recognition, computer vision, language understanding, and various other machine learning tasks. He has co-designed/implemented many generations of Google's crawling, indexing, and query serving systems, and co-designed/implemented major pieces of Google's initial advertising and AdSense for Content systems. He is also a co-designer and co-implementor of Google's distributed computing infrastructure, including the MapReduce, BigTable and Spanner systems, protocol buffers, the open-source TensorFlow system for machine learning, and a variety of internal and external libraries and developer tools.  

Jeff received a Ph.D. in Computer Science from the University of Washington in 1996, working with Craig Chambers on whole-program optimization techniques for object-oriented languages.  He received a B.S. in computer science & economics from the University of Minnesota in 1990. He is a member of the National Academy of Engineering and of the American Academy of Arts and Sciences, a Fellow of the Association for Computing Machinery (ACM), a Fellow of the American Association for the Advancement of Science (AAAS), and a winner of the ACM Prize in Computing and the Mark Weiser Award.

*A reception in the Friend Center Upper Atrium will follow immediately after the talk.
**For campus parking information, please visit this link.

The Algorithmic Lens: How the Computational Perspective is Transforming the Sciences

Date and Time
Tuesday, February 19, 2008 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Lecture Series
Speaker
Christos Papadimitriou, from UC Berkeley
Host
Sanjeev Arora
Computational research transforms the sciences (physical, mathematical, life or social) not just by empowering them analytically, but mainly by providing a novel and powerful perspective which often leads to unforeseen insights. Examples abound: quantum computation provides the right forum for questioning and testing some of the most basic tenets of quantum physics, while statistical mechanics has found in the efficiency of randomized algorithms a powerful metaphor for phase transitions. In mathematics, the P vs. NP problem has joined the list of the most profound and consequential problems, and in economics considerations of computational complexity revise predictions of economic behavior and affect the design of economic mechanisms such as auctions. Finally, in biology some of the most fundamental problems, such as understanding the brain and evolution, can be productively recast in computational terms. My talk is structured around eight vignettes exemplifying this pattern.
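
The phase-transition metaphor can be made concrete: random 3-SAT formulas flip from almost always satisfiable to almost always unsatisfiable as the clause-to-variable ratio crosses roughly 4.27. A brute-force sketch of this effect (helper names invented; feasible only for tiny instances):

    # Estimate the satisfiability probability of random 3-SAT as the
    # clause-to-variable ratio m/n grows; the sharp drop near ~4.27 is
    # the "phase transition" the abstract alludes to.
    import itertools, random

    def random_3sat(n_vars, n_clauses):
        # Each clause: 3 distinct variables, each negated with probability 1/2.
        return [[v * random.choice((1, -1))
                 for v in random.sample(range(1, n_vars + 1), 3)]
                for _ in range(n_clauses)]

    def satisfiable(clauses, n_vars):
        # Brute force over all 2^n assignments (tiny n only).
        for bits in itertools.product((False, True), repeat=n_vars):
            if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return True
        return False

    n, trials = 12, 50
    for ratio in (3.0, 4.0, 4.3, 5.0):
        sat = sum(satisfiable(random_3sat(n, int(ratio * n)), n)
                  for _ in range(trials))
        print(f"m/n = {ratio:.1f}: ~{sat / trials:.0%} satisfiable")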

MyLifeBits: A Project to Implement Memex

Date and Time
Wednesday, October 15, 2003 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Lecture Series
Speaker
Gordon Bell, from Microsoft
Host
Kai Li
The MyLifeBits project aims to put all personal documents and media online. We have been capturing and storing personal articles, books, correspondence (letters and email), CDs, memos, papers, photos, pictures, presentations, movies, videotaped lectures, telephone calls, and all web pages visited. We have built a system to support MyLifeBits, beginning with a server that supports capture, storage, and management of personal media, including TV with Web enhancement, radio, personal music collections, and home video. Such a project raises issues ranging from ensuring that this information will remain readable in the future to security. The user interface issues are many, and highly dependent on the various applications needed to make the data valuable for current and future use.
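
To make the storage problem concrete, the following sketch shows the kind of typed-item store such a server might use. The schema is hypothetical, not the project's actual design, and SQLite stands in for whatever engine a real deployment would choose:

    # Hypothetical MyLifeBits-style store: every captured item gets a typed
    # record, plus free-form annotations and item-to-item links.
    import sqlite3

    db = sqlite3.connect("mylifebits.db")
    db.executescript("""
    CREATE TABLE IF NOT EXISTS items (
        id        INTEGER PRIMARY KEY,
        kind      TEXT NOT NULL,   -- 'email', 'photo', 'webpage', 'call', ...
        captured  TEXT NOT NULL,   -- ISO-8601 capture timestamp
        blob_path TEXT NOT NULL    -- where the raw media lives
    );
    CREATE TABLE IF NOT EXISTS annotations (
        item_id INTEGER REFERENCES items(id),
        text    TEXT NOT NULL      -- user- or machine-generated commentary
    );
    CREATE TABLE IF NOT EXISTS links ( -- arbitrary item-to-item associations
        src INTEGER REFERENCES items(id),
        dst INTEGER REFERENCES items(id)
    );
    """)

    db.execute("INSERT INTO items (kind, captured, blob_path) VALUES (?, ?, ?)",
               ("photo", "2003-10-15T16:00:00", "/store/photos/0001.jpg"))
    db.commit()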

Why is Graphics Hardware So Fast?

Date and Time
Wednesday, December 5, 2001 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Lecture Series
Speaker
Pat Hanrahan, from Stanford University
Host
Thomas Funkhouser
Recently NVIDIA has claimed that their graphics processors (GPUs) are improving at a rate three times faster than Moore's Law. For many years the performance of SGI graphics workstations increased at roughly 75% per year. The result is that the latest generation of commodity graphics and game chips is much faster than the main processor. Quoted performance ranges from 50-100 gigaflops to approximately 1 tera-8-bit-ops. This increase in performance comes along with additional functionality. The most recent innovation is programmable vertex and fragment stages, which allow these processors to compute a wide range of new effects. Why are these graphics processors so fast? Will the future performance of GPUs continue to increase faster than CPUs? And, if so, what are the implications for computing?
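
These growth rates compound very differently. A back-of-the-envelope comparison, reading "three times faster than Moore's Law" as tripling rather than doubling every 18 months (one plausible interpretation, not NVIDIA's stated definition):

    # Compare the annual growth rates quoted in the abstract.
    moore  = 2.0 ** (12 / 18)   # doubling every 18 months: ~1.59x per year
    sgi    = 1.75               # SGI workstations: 75% per year
    nvidia = 3.0 ** (12 / 18)   # tripling every 18 months: ~2.08x per year

    for name, rate in [("Moore", moore), ("SGI", sgi), ("NVIDIA claim", nvidia)]:
        print(f"{name:>12}: {rate:.2f}x/year -> {rate ** 5:.0f}x after 5 years")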

Immersion and Tele-immersion in the Office of the Future

Date and Time
Wednesday, November 28, 2001 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Lecture Series
Speaker
Henry Fuchs, from University of North Carolina at Chapel Hill
Host
Thomas Funkhouser
We envision an Office of the Future in which images are displayed on walls and other surfaces to provide an immersive environment with a sense of common presence among local and distant participants and their shared work objects. Much of the tele-immersion portion of our research has been part of a four-site National Tele-Immersion Initiative (NTII) led by Jaron Lanier, chief scientist of the principal funder, Advanced Network & Services. In the current, primitive implementation, we at NTII use clusters of seven digital cameras to acquire the changing 3D surface of each remote partner. These live 3D images are merged with (presently, pre-acquired) 3D scans of each remote office and the common work objects, and all of these are displayed in head-tracked stereo on walls of the local office. UNC's Office of the Future and tele-collaboration work have also been part of the 5-site (USA) NSF Science and Technology Center in Graphics and Scientific Visualization. Related efforts include panoramic acquisition by clusters of cameras, new image-based rendering methods, and wide-area displays ("video walls") built with numerous casually placed, ceiling-mounted projectors that are automatically calibrated by multiple cameras. We hope these efforts will improve today's ubiquitous personal computers so that they are no longer restricted by their desktop monitors, but emerge to integrate smoothly with their users' 3D physical work environment.

Animating with Simulation

Date and Time
Monday, November 19, 2001 - 3:30pm to 5:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Lecture Series
Speaker
Jessica Hodgins, from Carnegie Mellon University
Host
Thomas Funkhouser
Computer animations and virtual environments both require a source of motion for characters and objects in the environment. We are exploring simulation as a possible solution to this problem. For characters, this solution requires applying control algorithms to physically realistic models of the systems that we would like to animate. By using these techniques to simulate humans, we are working towards avatars that are responsive to the user's subtle gestures and interactive agents that respond appropriately to events in a virtual environment. For example, we developed control algorithms that allow rigid body models to run or bicycle, bounce on a trampoline, and perform a handspring vault. Recently, we have begun to combine simulations with motion capture data in the hope that these techniques will benefit both from the physical realism of simulation and from the humanlike motion provided by captured data. We are using human motion data to inform the construction of control systems and to construct interfaces for avatars.
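
The simplest instance of a control algorithm driving a physically realistic model is a proportional-derivative (PD) controller pulling a simulated joint toward a target angle. The sketch below uses a one-degree-of-freedom model with invented gains, not the speaker's actual controllers:

    # PD controller driving one simulated joint toward a target angle.
    # Gains, inertia, and timestep are made-up illustrative values.
    kp, kd, inertia, dt = 40.0, 8.0, 1.0, 0.01
    theta, omega, target = 0.0, 0.0, 1.0   # angle (rad), velocity, goal

    for step in range(300):                           # 3 simulated seconds
        torque = kp * (target - theta) - kd * omega   # PD control law
        omega += (torque / inertia) * dt              # integrate dynamics
        theta += omega * dt
    print(f"final angle: {theta:.3f} rad (target {target:.1f})")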

Art, Computer Graphics, and Perception

Date and Time
Wednesday, November 14, 2001 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Lecture Series
Speaker
Spike Hughes, from Brown University
Host
Thomas Funkhouser
Art, in various forms, has been around for about 50,000 years. Computer graphics has been around for about 50. Both aim (at least in part) to create pictures, often for the purpose of generating some reaction in the mind of the viewer. Graphics, with its late start, and hence the advantage of reusing prior methods, should be a strong competitor, but so far art seems to be winning: artists, with a few pen strokes, can create a more vivid and lasting impression than can graphics programs using millions of pixels. That's because until a decade ago researchers in graphics had taken very little from art; arguably the primary knowledge transfer was the idea and mathematics of perspective. In this talk, I'll discuss an idiosyncratic view of some basic techniques of art: that these techniques involve "spoofing" of the human visual system, and that researchers in graphics, by understanding how this is done, can make better pictures based on an understanding of human perception. I'll illustrate with examples from recent research both from Brown and from other places. I'll also discuss a less obvious notion: that using ideas from art, we may be able to improve not just output techniques, but input techniques as well.

Evolution of Graphics Architectures

Date and Time
Monday, November 5, 2001 - 3:30pm to 5:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Lecture Series
Speaker
Turner Whitted, from Microsoft Research
Host
Thomas Funkhouser
Graphics processors are smaller, faster, and more full-featured today than even Moore's law predicts they should be. In spite of this dramatic increase in performance, the basic elements of graphics hardware have remained the same for over 30 years. Polygon transformation units coupled to texture-mapping rasterizers have undergone substantial embellishment, but no fundamental structural changes in all this time. Recent advances in display algorithms and fundamental changes in graphics representation, most notably the popularity of image-based rendering, have sparked a re-examination of the functions of graphics hardware. This talk describes some initial attempts to re-invent the graphics display pipeline, to modernize its features, and to make its power available to a broader set of imaging operations.

Bio: see http://www.research.microsoft.com/users/jtw
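
The two-stage structure the abstract describes, a vertex transformation unit feeding a rasterizer, can be sketched as a toy software pipeline. This resembles no particular piece of hardware and is purely illustrative:

    # Toy two-stage pipeline: transform vertices, then rasterize a triangle.
    import numpy as np

    def transform(verts, matrix):
        """Vertex stage: 4x4 transform of homogeneous vertices, then
        perspective divide to screen-space x, y."""
        v = np.hstack([verts, np.ones((len(verts), 1))]) @ matrix.T
        return v[:, :2] / v[:, 3:4]

    def rasterize(tri, size=8):
        """Fragment stage: mark pixels whose centers lie inside the
        triangle (edge-function sign test)."""
        def edge(px, py, x0, y0, x1, y1):
            return (x1 - x0) * (py - y0) - (y1 - y0) * (px - x0)
        (ax, ay), (bx, by), (cx, cy) = tri
        img = np.zeros((size, size), dtype=int)
        for y in range(size):
            for x in range(size):
                s = [edge(x + 0.5, y + 0.5, ax, ay, bx, by),
                     edge(x + 0.5, y + 0.5, bx, by, cx, cy),
                     edge(x + 0.5, y + 0.5, cx, cy, ax, ay)]
                if all(v >= 0 for v in s) or all(v <= 0 for v in s):
                    img[y, x] = 1
        return img

    tri = transform(np.array([[0.1, 0.1, 0.0], [7.5, 1.0, 0.0], [4.0, 7.5, 0.0]]),
                    np.eye(4))
    print(rasterize(tri))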

Theory and Practice for Fair Electronic Exchange

Date and Time
Wednesday, October 24, 2001 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Lecture Series
Speaker
Silvio Micali, from MIT
Host
Amit Sahai
Assume each of two parties has something, deliverable electronically, that the other wants. A fair electronic exchange is then a protocol guaranteeing that either both parties get what they want, or neither does. (E.g., in certified e-mail, the Recipient should get the Sender's mail if and only if the Sender gets the Recipient's receipt.) Protocols relying on traditional trusted parties easily guarantee such exchanges, but are inefficient (because a trusted party must be part of every execution) and expensive (because trusted parties want to be paid for each execution). Merging theory and practice, we show how to implement trusted parties in an invisible way, so as to provide fair exchange protocols that are more efficient and much more economical than traditional ones.
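
The flavor of the "invisible" trusted party can be sketched as an optimistic protocol: the trusted party holds a key but is contacted only if someone misbehaves, so in the honest case it never participates. The following is a simplified illustration in that spirit, not Micali's actual protocol; real versions use public-key encryption and digital signatures, while XOR stands in here:

    # Optimistic fair exchange sketch for certified e-mail. The trusted
    # party (TTP) stays "invisible" unless there is a dispute.
    import os

    def xor(data: bytes, key: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(data, key))

    ttp_key = os.urandom(32)   # in reality: the TTP's public/private keypair

    # Step 1: Sender encrypts the mail so only the TTP could open it,
    # and hands the ciphertext to the Recipient.
    mail = b"the certified message".ljust(32)
    ciphertext = xor(mail, ttp_key)

    # Step 2: Recipient returns a (conceptually signed) receipt bound
    # to the ciphertext.
    receipt = ("signed-receipt-for", ciphertext)

    # Step 3, honest case: Sender now holds the receipt and reveals the
    # mail directly. The TTP never hears about the exchange at all.

    # Step 3', dispute: if the Sender never reveals, the Recipient sends
    # the ciphertext and receipt to the TTP, which decrypts for the
    # Recipient and forwards the receipt to the Sender. Either way, both
    # parties get what they wanted, or neither does.
    def ttp_resolve(ciphertext, receipt):
        return xor(ciphertext, ttp_key), receipt  # mail to R, receipt to S

    assert ttp_resolve(ciphertext, receipt)[0] == mail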

Digital Geometry Processing

Date and Time
Thursday, October 18, 2001 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Lecture Series
Speaker
Wim Sweldens, from Lucent Technologies
Host
Kenneth Steiglitz
Digital geometry, the large polygonal meshes that come from digitizing complex shapes, is the fourth wave of multimedia after sound, images, and video. The basic idea behind digital geometry processing is to bring the entire suite of standard signal processing algorithms, such as editing, filtering, and compression, to digital geometry. This is challenging because, unlike sound, images, and video, geometry is not defined on a Euclidean space, and traditional Fourier-based techniques no longer apply. Instead we propose a new paradigm based on so-called semi-regular meshes formed by recursive subdivision and local displacements. We show how parameterizations can be used to build semi-regular meshes, how they are almost perfectly suited for compression, and how they can be used in editing and filtering operations. We discuss extensions which deal with sets of meshes and with topology changes. Finally we show some recent theoretical results on regularity, stability, and approximation quality.
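
The semi-regular construction, recursive subdivision plus stored local displacements, is easy to illustrate on a curve, the 2D analogue of the surface case. The numbers below are invented for illustration:

    # Semi-regular refinement of a polyline: each level inserts edge
    # midpoints and offsets each one along its edge normal by a stored
    # displacement coefficient. Zeroing small coefficients is what makes
    # the representation compress well.
    import numpy as np

    def subdivide(points, displacements):
        out = [points[0]]
        for (a, b), d in zip(zip(points, points[1:]), displacements):
            mid = (a + b) / 2.0
            t = b - a
            normal = np.array([-t[1], t[0]]) / np.linalg.norm(t)
            out += [mid + d * normal, b]
        return np.array(out)

    # Coarse base curve, then one displacement per edge per level.
    curve = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
    for level in ([0.3, -0.2], [0.1, -0.05, 0.05, -0.1]):
        curve = subdivide(curve, level)
    print(curve.round(2))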