Distinguished Colloquium Series Speaker

Interactive Data Analysis: Visualization and Beyond

Date and Time
Monday, November 6, 2017 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Prof. Adam Finkelstein

Professor Jeffrey Heer, University of Washington
Data analysis is a complex process with frequent shifts among data formats, tools and models, as well as between symbolic and visual thinking. How might the design of improved tools accelerate people's exploration and understanding of data? Covering both interactive demos and principles from academic research, this talk will examine how to craft a careful balance of interactive and automated methods, combining concepts from data visualization, machine learning, and computer systems to design novel interactive analysis tools.

Jeffrey Heer is an Associate Professor of Computer Science & Engineering at the University of Washington, where he directs the Interactive Data Lab and conducts research on data visualization, human-computer interaction and social computing. The visualization tools developed by Jeff and his collaborators (Vega, D3.js, Protovis, Prefuse) are used by researchers, companies, and thousands of data enthusiasts around the world. Jeff’s research papers have received awards at the premier venues in Human-Computer Interaction and Visualization (ACM CHI, UIST, CSCW; IEEE InfoVis, VAST, EuroVis). Other honors include MIT Technology Review’s TR35 (2009), a Sloan Fellowship (2012), and the ACM Grace Murray Hopper Award (2016). Jeff holds B.S., M.S., and Ph.D. degrees in Computer Science from UC Berkeley, whom he then betrayed to join the Stanford faculty (2009–2013). He is also a co-founder of Trifacta, a provider of interactive tools for scalable data transformation.

 

Emotion Tracking for Health and Wellbeing

Date and Time
Monday, November 20, 2017 - 12:30pm to 1:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Prof. Szymon Rusinkiewicz

Dr. Mary Czerwinski
Affective computing is emerging as an important field in the design of emotional, intelligent, conversational agents that can be used in the healthcare arena, but also in everyday life. In addition, ubiquitous recording, both in the field and in the doctor's office or the patient's home, has influenced how we think about wellbeing in the future. In our research, we use sensing technologies to develop contextualized and precise delivery of interventions, both in terms of the content and the timing of the delivery, using machine learning algorithms. I will discuss how we use affective computing technologies to deliver just-in-time health interventions for improved health and for personal, behavioral reflection. For example, I will describe the Entendre project, which has implications for the design of visual feedback to encourage empathic, patient-centered communication. I will also talk about ParentGuardian, a wearable sensing system that delivers just-in-time interventions to parents of children with ADHD. In addition, I'll present our findings from two applications that deliver interventions and skills from psychology for coping with conditions ranging from general stress and depression to serious mental illness, such as suicidal intent, using conversational agents that users trust. Finally, I'll briefly touch on some of our designs for helping users reflect on their daily behaviors in order to improve general well-being.

Bio: Dr. Mary Czerwinski is a Principal Researcher and Research Manager of the Visualization and Interaction (VIBE) Research Group at Microsoft Research. Mary's latest research focuses primarily on emotion tracking and intervention design and delivery, information worker task management, and health and wellness for individuals and groups. Her research background is in visual attention and multitasking. She holds a Ph.D. in Cognitive Psychology from Indiana University in Bloomington. Mary was awarded the ACM SIGCHI Lifetime Service Award, was inducted into the CHI Academy, and became an ACM Distinguished Scientist in 2010. She also received the Distinguished Alumni award from Indiana University's Brain and Psychological Sciences department in 2014. Mary became a Fellow of the ACM in 2016. More information about Dr. Czerwinski can be found on her website.

Real Humans, Simulated Attacks: Usability Testing with Attack Scenarios

Date and Time
Monday, November 13, 2017 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Dr. Marshini Chetty
Professor Lorrie Faith Cranor
User studies are critical to understanding how users perceive and interact with security and privacy software and features. While it is important that users be able to configure and use security tools when they are not at risk, it is even more important that the tools continue to protect users during an attack. Conducting user studies in the presence of (simulated) risk is complicated. We would like to observe how users behave when they are actually at risk, but at the same time we cannot harm user study participants or subject them to increased risk. Often the risky situations we are interested in occur relatively infrequently in the real world, and thus can be difficult to observe in the wild. Researchers use a variety of strategies to overcome these challenges and place participants in situations where they will believe their security or privacy is at risk, without subjecting them to increases in actual harm. In some studies, researchers recruit participants to perform real tasks not directly related to security so that they can observe how participants respond to simulated security-related prompts or cues that occur while users are focused on primary tasks. In other studies, researchers create a hypothetical scenario and try to get participants sufficiently engaged in it that they will be motivated to avoid simulated harm. Sometimes researchers have the opportunity to observe real, rather than simulated, attacks, although these opportunities are usually difficult to come by. Researchers can monitor real-world user behavior over long periods of time (in public or with permission of participants) and observe how users respond to risks that occur naturally, without researcher intervention. In this talk I will motivate the importance of security user studies and talk about a number of different user study approaches we have used at the CyLab Usable Privacy and Security Lab at Carnegie Mellon University.
 
Lorrie Faith Cranor is a Professor of Computer Science and of Engineering and Public Policy at Carnegie Mellon University, where she is director of the CyLab Usable Privacy and Security Laboratory (CUPS). She is associate department head of the Engineering and Public Policy Department and co-director of the MSIT-Privacy Engineering master's program. In 2016 she served as Chief Technologist at the US Federal Trade Commission, working in the office of Chairwoman Ramirez. She is also a co-founder of Wombat Security Technologies, Inc., a security awareness training company. She has authored over 150 research papers on online privacy, usable security, and other topics. She has played a key role in building the usable privacy and security research community, having co-edited the seminal book Security and Usability (O'Reilly 2005) and founded the Symposium On Usable Privacy and Security (SOUPS). She also chaired the Platform for Privacy Preferences Project (P3P) Specification Working Group at the W3C and authored the book Web Privacy with P3P (O'Reilly 2002). She has served on a number of boards, including the Electronic Frontier Foundation Board of Directors, and on the editorial boards of several journals. In her younger days she was honored as one of the top 100 innovators 35 or younger by Technology Review magazine. More recently she was elected to the ACM CHI Academy, named an ACM Fellow for her contributions to usable privacy and security research and education, and named an IEEE Fellow for her contributions to privacy engineering. She was previously a researcher at AT&T Labs-Research and taught in the Stern School of Business at New York University. She holds a doctorate in Engineering and Policy from Washington University in St. Louis. In 2012-13 she spent her sabbatical as a fellow in the Frank-Ratchye STUDIO for Creative Inquiry at Carnegie Mellon University, where she worked on fiber arts projects that combined her interests in privacy and security, quilting, computers, and technology. She practices yoga, plays soccer, and runs after her three children.
 

From On Body to Out of Body User Experience

Date and Time
Friday, December 1, 2017 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Prof. Olga Russakovsky

Professor James A. Landay

Today’s most common user interfaces represent an incremental change from the GUI popularized by the Apple Macintosh in 1984. Over the last 30 years the dominant hardware has changed drastically while the user interface has barely moved: from one hand on a mouse to two fingers on a panel of glass. I will illustrate how we are building on-body interfaces of the future that further engage our bodies by using muscle sensing for input and vibrotactile output, offering discreet and natural interaction on the go. I will also show how other interfaces we are designing take an even more radical approach, moving the interface off the human body altogether and onto drones that project into the space around them. Finally, I will introduce a new project where we envision buildings as hybrid physical-digital spaces that both sense and actuate to improve human wellbeing.

Bio: James Landay is a Professor of Computer Science and the Anand Rajaraman and Venky Harinarayan Professor in the School of Engineering at Stanford University. He specializes in human-computer interaction. He is the founder and co-director of the World Lab, a joint research and educational effort with Tsinghua University in Beijing. Previously, Landay was a Professor of Information Science at Cornell Tech in New York City, and prior to that he was a Professor of Computer Science & Engineering at the University of Washington and a Professor in EECS at UC Berkeley. From 2003 through 2006 he was the Laboratory Director of Intel Labs Seattle, a university-affiliated research lab that explored new usage models, applications, and technology for ubiquitous computing. He was also the chief scientist and co-founder of NetRaker, which was acquired by KeyNote Systems in 2004. Landay received his BS in EECS from UC Berkeley in 1990, and MS and PhD in Computer Science from Carnegie Mellon University in 1993 and 1996, respectively. He is a member of the ACM SIGCHI Academy and an ACM Fellow.

CorfuDB: Transactional Data Services over a Shared Log

Date and Time
Tuesday, November 18, 2014 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Speaker
Dahlia Malkhi, until recently of Microsoft Research, Silicon Valley
Host
Michael Freedman

Dahlia Malkhi

Conventional wisdom has it that the only way to scale replicated services is by partitioning the data. What would you do if given an infrastructure that breaks the seeming tradeoff between consistency and scale?
 
The talk will describe our experience with building CorfuDB, a distributed fabric that drives consistency and transactional guarantees at high throughput. CorfuDB facilitates building distributed services in which in-memory data structures are backed by a shared log. The core is built around the CORFU log, which clients can append to and read from over a network. Internally, CORFU is distributed over a cluster of machines with no single I/O bottleneck to either appends or reads. Atop CORFU is Tango, a fabric for programming transactional data services such as the Hyder DB and an Apache ZooKeeper alternative.
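
To make the shared-log pattern concrete, the following Python sketch shows an in-memory map whose state is reconstructed by replaying an append-only log. It is a simplified, in-process illustration rather than the CorfuDB or Tango API: SharedLog and LogBackedMap are hypothetical names, and the real CORFU log is replicated across a cluster and accessed over the network.

# Minimal sketch of a log-backed map in the spirit of CorfuDB/Tango (hypothetical API).
class SharedLog:
    def __init__(self):
        self._entries = []

    def append(self, entry):
        """Append an entry and return its log position."""
        self._entries.append(entry)
        return len(self._entries) - 1

    def read_from(self, position):
        """Return (position, entry) pairs at or after the given position."""
        return list(enumerate(self._entries))[position:]

class LogBackedMap:
    """An in-memory map whose state is derived entirely from the shared log."""
    def __init__(self, log):
        self._log = log
        self._state = {}
        self._next_pos = 0

    def put(self, key, value):
        self._log.append(("put", key, value))

    def get(self, key):
        self._sync()  # catch up on entries appended by any client
        return self._state.get(key)

    def _sync(self):
        for pos, (op, key, value) in self._log.read_from(self._next_pos):
            if op == "put":
                self._state[key] = value
            self._next_pos = pos + 1

log = SharedLog()
m1, m2 = LogBackedMap(log), LogBackedMap(log)
m1.put("x", 42)
print(m2.get("x"))  # 42: m2 sees m1's update by replaying the shared log

Because every client replays the same totally ordered log, all in-memory copies converge on the same state; the engineering challenge CORFU addresses is making that log fast, replicated, and free of a single I/O bottleneck.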

Dahlia Malkhi was a principal researcher at Microsoft Research, Silicon Valley from 2004 until the lab shut down in September 2014. She has worked on applied and foundational research in the reliability, consistency, and security of distributed computing since the early nineties.
 
Prior to joining Microsoft Research, Dr. Malkhi was an associate professor at the Hebrew University of Jerusalem (1999-2007), left for a brief sabbatical, but was bitten by the Silicon Valley bug and remained at Microsoft. She holds a PhD (1994), M.Sc., and B.Sc. in computer science from the Hebrew University of Jerusalem, making her the only CS faculty member to return to the Hebrew U. for all four academic stages. Dr. Malkhi was elected an ACM Fellow in 2011, received the IBM Faculty Award in 2003 and 2004, and received the German-Israeli Foundation (G.I.F.) Young Scientist Award in 2002. She serves on the editorial boards of the IEEE Transactions on Dependable and Secure Computing and of the Distributed Computing journal. She chaired LADIS 2012, Locality 2007, PODC 2006, Locality 2005, and DISC 2002.

What Google Glass means for the future of photography

Date and Time
Monday, October 20, 2014 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Jianxiong Xiao

Marc Levoy

Although head-mounted cameras (and displays) are not new, Google Glass has the potential to make these devices commonplace. This has implications for the practice, art, and uses of photography. So what's different about doing photography with Glass? First, Glass doesn't work like a conventional camera; it's hands-free, point-of-view, always available, and instantly triggerable. Second, Glass facilitates different uses than a conventional camera: recording documents, making visual to-do lists, logging your life, and swapping eyes with other Glass users. Third, Glass will be an open platform, unlike most cameras.

Opening Glass up in this way is not easy, because it is a heterogeneous computing platform, with multiple processors that differ in performance, efficiency, and programmability. The challenge is to invent software abstractions that allow control over the camera as well as access to these specialized processors. Finally, devices like Glass that are head-mounted and perform computational photography in real time have the potential to give wearers "superhero vision", like seeing in the dark or magnifying subtle motion or changes. If such devices can also perform computer vision in real time and are connected to the cloud, then they can do face recognition, live language translation, and information recall. The hard part is not imagining these capabilities, but deciding which ones are feasible, useful, and socially acceptable.

Marc Levoy is the VMware Founders Professor of Computer Science, Emeritus. Education: B. Architecture and M.S. from Cornell (1976, 1978), PhD in Computer Science from the University of North Carolina (1989). In previous lives he worked on computer-assisted cartoon animation (1970s), volume rendering (1980s), 3D scanning (1990s), and computational photography (2000s), including light field photography and microscopy. At Stanford he taught computer graphics, digital photography, and the science of art. Outside of academia, Levoy co-designed the Google book scanner, launched Google's Street View project, and currently leads a team in GoogleX working on Project Glass and the Nexus 5's HDR+ mode. Awards: Charles Goodwin Sands Medal for best undergraduate thesis (1976), National Science Foundation Presidential Young Investigator (1991), ACM SIGGRAPH Computer Graphics Achievement Award (1996), ACM Fellow (2007).

The Unreasonable Effectiveness of Deep Learning

Date and Time
Wednesday, October 22, 2014 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Jianxiong Xiao

Yann LeCun

The emergence of large datasets, parallel computers, and new machine learning methods has enabled the deployment of highly accurate computer perception systems and is opening the door to a wide deployment of AI systems.

A key component in AI systems is a module, sometimes called a feature extractor, that turns raw inputs into suitable internal representations. But designing and building such a module requires a considerable amount of engineering effort and domain expertise.

Deep Learning methods have provided a way to automatically learn good representations of data from labeled or unlabeled samples. Deep architectures are composed of successive stages in which data representations are increasingly global, abstract, and invariant to irrelevant transformations of the input. Deep learning enables end-to-end training of these architectures, from raw inputs to ultimate outputs.

The convolutional network model (ConvNet) is a particular type of deep architecture, somewhat inspired by biology, which consists of multiple stages of filter banks interspersed with non-linear operators and spatial pooling. ConvNets have become the record holders for a wide variety of benchmarks, including object detection, localization and recognition in images, semantic segmentation and labeling, face recognition, acoustic modeling for speech recognition, drug design, handwriting recognition, biological image segmentation, and more.
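
As a rough, self-contained illustration of the stage structure just described, and not of any system mentioned in the talk, the following Python/NumPy sketch applies a small random filter bank, a ReLU non-linearity, and non-overlapping max pooling to a single-channel image.

# Toy single ConvNet stage: filter bank -> non-linearity -> spatial pooling.
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (implemented as cross-correlation) of one channel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

image = np.random.rand(16, 16)
filters = [np.random.randn(3, 3) for _ in range(4)]  # a small random filter bank
feature_maps = [max_pool(relu(conv2d(image, f))) for f in filters]
print(feature_maps[0].shape)  # (7, 7): one pooled feature map per filter

In a real ConvNet the filters are learned end-to-end by backpropagation and several such stages are stacked, so the representation becomes more global, abstract, and invariant at each stage, as described above.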

The most recent systems deployed by Facebook, Google, NEC, IBM, Microsoft, Baidu, Yahoo and others for image understanding, speech recognition, and natural language processing use deep learning. Many of these systems use very large and very deep ConvNets with billions of connections, trained in supervised mode. But many new applications require the use of unsupervised feature learning. A number of such methods based on sparse auto-encoders will be presented.

Several applications will be shown through videos and live demos, including a category-level object recognition system that can be trained on the fly, a scene parsing system that can label every pixel in an image with the category of the object it belongs to, an object localization and detection system, and several natural language processing systems. Specialized hardware architectures that run these systems in real time will also be described.

Yann LeCun is Director of AI Research at Facebook, and Silver Professor of Data Science, Computer Science, Neural Science, and Electrical Engineering at New York University, affiliated with the NYU Center for Data Science, the Courant Institute of Mathematical Science, the Center for Neural Science, and the Electrical and Computer Engineering Department.

He received the Electrical Engineer Diploma from Ecole Supérieure d'Ingénieurs en Electrotechnique et Electronique (ESIEE), Paris in 1983, and a PhD in Computer Science from Université Pierre et Marie Curie (Paris) in 1987. After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories in Holmdel, NJ in 1988. He became head of the Image Processing Research Department at AT&T Labs-Research in 1996, and joined NYU as a professor in 2003, after a brief period as a Fellow of the NEC Research Institute in Princeton. From 2012 to 2014 he directed NYU's initiative in data science and became the founding director of the NYU Center for Data Science. He was named Director of AI Research at Facebook in late 2013 and retains a part-time position on the NYU faculty.

His current interests include AI, machine learning, computer perception, mobile robotics, and computational neuroscience. He has published over 180 technical papers and book chapters on these topics as well as on neural networks, handwriting recognition, image processing and compression, and on dedicated circuits and architectures for computer perception. The character recognition technology he developed at Bell Labs is used by several banks around the world to read checks and was reading between 10 and 20% of all the checks in the US in the early 2000s. His image compression technology, called DjVu, is used by hundreds of web sites and publishers and millions of users to access scanned documents on the Web. Since the mid-1980s he has been working on deep learning methods, particularly the convolutional network model, which is the basis of many products and services deployed by companies such as Facebook, Google, Microsoft, Baidu, IBM, NEC, AT&T and others for image and video understanding, document recognition, human-computer interaction, and speech recognition.

LeCun has been on the editorial boards of IJCV, IEEE PAMI, and IEEE Trans. Neural Networks, was program chair of CVPR'06, and is chair of ICLR 2013 and 2014. He is on the science advisory boards of the Institute for Pure and Applied Mathematics and of the Neural Computation and Adaptive Perception Program of the Canadian Institute for Advanced Research. He has advised many large and small companies about machine learning technology, including several startups he co-founded. He is the lead faculty at NYU for the Moore-Sloan Data Science Environment, a $36M initiative in collaboration with UC Berkeley and the University of Washington to develop data-driven methods in the sciences. He is the recipient of the 2014 IEEE Neural Network Pioneer Award.

Temporal Dynamics and Information Retrieval

Date and Time
Tuesday, November 5, 2013 - 4:30pm to 5:30pm
Location
Friend Center 006
Type
Distinguished Colloquium Series Speaker
Host
David Blei
Many digital resources, like the Web, are dynamic and ever-changing collections of information. However, most tools developed for interacting with Web content, such as browsers and search engines, focus on a single static snapshot of the information. In this talk, I will present analyses characterizing how Web content changes over time, how people re-visit Web pages over time, and how re-visitation patterns are influenced by changes in user intent and content. These results have implications for many aspects of information management including crawling policy, ranking and information extraction algorithms, result presentation, and system evaluation. I will describe a prototype that supports people in understanding how the information they interact with changes over time, and new information retrieval models that incorporate the temporal dynamics to improve ranking. Finally, I will conclude with speculations about "slow search" and an overview of challenges that need to be addressed to fully incorporate temporal dynamics into information systems.
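
As a toy illustration of blending relevance with temporal signals, and not of the specific retrieval models described in the talk, the following Python sketch re-ranks documents with an exponential recency decay; the half-life value is an arbitrary assumption.

# Toy recency-aware ranking: combine a static relevance score with an
# exponential decay on document age. The 30-day half-life is an arbitrary
# illustrative choice, not a parameter from the talk.
import math

def recency_weighted_score(relevance, age_days, half_life_days=30.0):
    decay = math.exp(-math.log(2) * age_days / half_life_days)
    return relevance * decay

docs = [("d1", 0.9, 120), ("d2", 0.7, 2), ("d3", 0.8, 15)]  # (id, relevance, age in days)
ranked = sorted(docs, key=lambda d: recency_weighted_score(d[1], d[2]), reverse=True)
print([d[0] for d in ranked])  # recent but relevant documents rise toward the top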

Susan Dumais is a Distinguished Scientist and manager of the Context, Learning and User Experience for Search (CLUES) Group at Microsoft Research. Prior to joining Microsoft Research, she was at Bell Labs and Bellcore for many years, where she worked on Latent Semantic Analysis, interfaces for combining search and navigation, and organizational impacts of new technology. Her current research focuses on user modeling and personalization, context and search, temporal dynamics of information, and novel evaluation methods. She has worked closely with several Microsoft groups (Bing, Windows Desktop Search, SharePoint, and Office Online Help) on search-related innovations. Susan has published widely in the fields of information science, human-computer interaction and cognitive science, and holds several patents on novel retrieval algorithms and interfaces. Susan is an adjunct professor in the Information School at the University of Washington. She is Past-Chair of ACM's Special Interest Group in Information Retrieval (SIGIR), and serves on several editorial boards, technical program committees, and government panels. She was elected to the CHI Academy in 2005, an ACM Fellow in 2006, received the SIGIR Gerard Salton Award for Lifetime Achievement in 2009, and was elected to the National Academy of Engineering (NAE) in 2011. More information is available at her homepage, http://research.microsoft.com/en-us/um/people/sdumais/.

On the Computational and Statistical Interface and "Big Data"

Date and Time
Wednesday, October 16, 2013 - 4:30pm to 5:30pm
Location
Friend Center 006
Type
Distinguished Colloquium Series Speaker
Host
David Blei
The rapid growth in the size and scope of datasets in science and technology has created a need for novel foundational perspectives on data analysis that blend the statistical and computational sciences. That classical perspectives from these fields are not adequate to address emerging problems in "Big Data" is apparent from their sharply divergent nature at an elementary level---in computer science, the growth of the number of data points is a source of "complexity" that must be tamed via algorithms or hardware, whereas in statistics, the growth of the number of data points is a source of "simplicity" in that inferences are generally stronger and asymptotic results can be invoked. Indeed, if data are a data analyst's principal resource, why should more data be burdensome in some sense? Shouldn't it be possible to exploit the increasing inferential strength of data at scale to keep computational complexity at bay? I present three research vignettes that pursue this theme, the first involving the deployment of resampling methods such as the bootstrap on parallel and distributed computing platforms, the second involving large-scale matrix completion, and the third introducing a methodology of "algorithmic weakening," whereby hierarchies of convex relaxations are used to control statistical risk as data accrue.

Joint work with Venkat Chandrasekaran, Ariel Kleiner, Lester Mackey, Purna Sarkar, and Ameet Talwalkar.
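
As a point of reference for the first vignette, here is a minimal Python sketch of the ordinary bootstrap for a confidence interval on the mean, parallelized naively across processes. It illustrates only the resampling primitive, not the scalable bootstrap methodology developed in the work described; the synthetic dataset, the number of resamples, and the use of multiprocessing are arbitrary choices for illustration.

# Plain bootstrap confidence interval for the mean, fanned out over worker
# processes. A generic illustration only; the scalable variants discussed in
# the talk restructure the resampling itself.
import numpy as np
from multiprocessing import Pool

def bootstrap_mean(args):
    data, seed = args
    rng = np.random.default_rng(seed)
    resample = rng.choice(data, size=len(data), replace=True)
    return resample.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(loc=5.0, scale=2.0, size=10_000)  # synthetic data
    with Pool() as pool:
        means = pool.map(bootstrap_mean, [(data, s) for s in range(1000)])
    lo, hi = np.percentile(means, [2.5, 97.5])
    print(f"95% bootstrap CI for the mean: ({lo:.3f}, {hi:.3f})")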

Michael I. Jordan is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley. His research interests bridge the computational, statistical, cognitive and biological sciences, and have focused in recent years on Bayesian nonparametric analysis, probabilistic graphical models, spectral methods, kernel machines and applications to problems in statistical genetics, signal processing, natural language processing and distributed computing systems. Prof. Jordan is a member of the National Academy of Sciences, a member of the National Academy of Engineering and a member of the American Academy of Arts and Sciences. He is a Fellow of the American Association for the Advancement of Science. He has been named a Neyman Lecturer and a Medallion Lecturer by the Institute of Mathematical Statistics, and has received the ACM/AAAI Allen Newell Award. He is a Fellow of the AAAI, ACM, ASA, CSS, IMS, IEEE and SIAM.

Algorithms, Graph Theory, and the Solution of Laplacian Linear Equations

Date and Time
Thursday, November 29, 2012 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Mark Braverman
We survey several fascinating concepts and algorithms in graph theory that arise in the design of fast algorithms for solving linear equations in the Laplacian matrices of graphs. We will begin by explaining why linear equations in these matrices are so interesting.

The problem of solving linear equations in these matrices motivates a new notion of what it means for one graph to approximate another. This leads to the problem of approximating graphs by sparse graphs. Our algorithms for solving Laplacian linear equations will exploit surprisingly strong approximations of graphs by sparse graphs, and even by trees.

We will survey the roles that spectral graph theory, random matrix theory, graph sparsification, low-stretch spanning trees and local clustering algorithms play in the design of fast algorithms for solving Laplacian linear equations.
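
For readers who want to see what a linear equation in the Laplacian of a graph looks like in practice, here is a small Python sketch that builds L = D - A from an edge list and solves Lx = b with an off-the-shelf sparse solver; the near-linear-time solvers discussed in the talk rely on far more sophisticated machinery (sparsifiers, low-stretch trees) than this generic approach.

# Build the graph Laplacian L = D - A from an edge list and solve L x = b
# with a generic sparse least-squares solver.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a small example graph
n = 4

rows, cols, vals = [], [], []
for u, v in edges:
    rows += [u, v]
    cols += [v, u]
    vals += [1.0, 1.0]
A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))  # adjacency matrix
D = sp.diags(np.asarray(A.sum(axis=1)).ravel())        # degree matrix
L = (D - A).tocsr()                                    # graph Laplacian

# L is singular (the all-ones vector spans its null space), so we pick a
# right-hand side that sums to zero (a unit of current injected at node 0
# and extracted at node 3) and use a least-squares solver.
b = np.array([1.0, 0.0, 0.0, -1.0])
x = spla.lsqr(L, b)[0]
print(x - x.mean())  # node potentials, normalized to mean zero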

Daniel Alan Spielman received his B.A. in Mathematics and Computer Science from Yale in 1992, and his Ph.D. in Applied Mathematics from M.I.T. in 1995. He spent a year as an NSF Mathematical Sciences Postdoc in the Computer Science Department at U.C. Berkeley, and then taught in the Applied Mathematics Department at M.I.T. until 2005. Since 2006, he has been a Professor at Yale University. He is presently the Henry Ford II Professor of Computer Science, Mathematics, and Applied Mathematics.

He has received many awards, including the 1995 ACM Doctoral Dissertation Award, the 2002 IEEE Information Theory Paper Award, the 2008 Gödel Prize, the 2009 Fulkerson Prize, the 2010 Nevanlinna Prize, an inaugural Simons Investigator Award, and a MacArthur Fellowship. He is a Fellow of the Association for Computing Machinery and a member of the Connecticut Academy of Science and Engineering. His main research interests include the design and analysis of algorithms, graph theory, machine learning, error-correcting codes and combinatorial scientific computing.
