
Distinguished Colloquium Series Speaker

A Case for An Open Source CS Curriculum

Date and Time
Thursday, December 6, 2018 - 4:30pm to 5:30pm
Location
Computer Science Large Auditorium (Room 104)
Type
Distinguished Colloquium Series Speaker
Host
Wyatt Lloyd

Thomas Anderson
Despite rapidly increasing enrollment in CS courses, the academic CS community is failing to keep pace with demand for trained CS students, leading to escalating starting salaries for our students. Further, the knowledge of how to teach students up to the state of the art is increasingly concentrated in a small cohort of schools that mostly cater to students from families in the top 10% of the income distribution.

Even in the best case, those schools lack the aggregate capacity to teach more than a small fraction of the nation's need for engineers and computer scientists. MOOCs can help, but they are mainly effective at retraining existing college graduates. In practice, most low- and middle-income students need a human teacher. In this talk I argue for building an open source CS curriculum, with autograded projects, instructional software, textbooks, and slideware, as an aid for teachers who want to improve education in advanced CS topics at schools attended by the children of the 90%. I will give as an example our work on replicating the teaching of advanced operating systems and distributed systems.

Bio:
Tom Anderson is the Warren Francis and Wilma Kolm Bradley Chair in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. His research interests span all aspects of building practical, robust, and efficient computer systems, including distributed systems, operating systems, computer networks, multiprocessors, and security. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, as well as winner of the USENIX Lifetime Achievement Award, the USENIX STUG Award, the IEEE Koji Kobayashi Computer and Communications Award, the ACM SIGOPS Mark Weiser Award, and the IEEE Communications Society William R. Bennett Prize. He is also an ACM Fellow, past program chair of SIGCOMM and SOSP, and he has co-authored twenty-one award papers and one widely used undergraduate textbook.

High Performance Operating Systems in the Data Center

Date and Time
Thursday, December 6, 2018 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Wyatt Lloyd

Thomas Anderson
The ongoing shift of enterprise computing to the cloud provides an opportunity to rethink operating systems for this new setting. I will discuss two specific technologies, kernel bypass for high-performance networking and low-latency non-volatile storage, and their implications for operating system design. In each case, delivering the performance of the underlying hardware requires novel approaches to the division of labor between hardware, the operating system kernel, and the application library.

Bio:
Tom Anderson is the Warren Francis and Wilma Kolm Bradley Chair in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. His research interests span all aspects of building practical, robust, and efficient computer systems, including distributed systems, operating systems, computer networks, multiprocessors, and security. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, as well as winner of the USENIX Lifetime Achievement Award, the USENIX STUG Award, the IEEE Koji Kobayashi Computer and Communications Award, the ACM SIGOPS Mark Weiser Award, and the IEEE Communications Society William R. Bennett Prize. He is also an ACM Fellow, past program chair of SIGCOMM and SOSP, and he has co-authored twenty-one award papers and one widely used undergraduate textbook.

Lunch will be available to talk attendees at 12:00pm.

Lost in Translation: Production Code Efficiency

Date and Time
Tuesday, December 4, 2018 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Speaker
Andrew V. Goldberg, from Amazon.com
Host
Robert Tarjan

When software engineers re-implement a high-performance research prototype, one often observes a drop in performance of one to two orders of magnitude. This holds even if both implementations use the same language. The gap is even wider when one goes from a lower-level language (e.g., C++) to a higher-level one (e.g., Java).

One of the root causes of this phenomenon is the misinterpretation by software engineers of what they learn in school. Theoretical computer scientists ignore constant factors for the sake of machine-independent analysis. Programming language researchers focus on compilers that automatically handle low-level OS and architectural issues such as memory management. Software engineering professors emphasize abstraction and re-usability. Many software engineers thus learn to ignore constant factors, rely on compilers for low-level efficiency, and use generic primitives for re-usability. This is tempting, as one has to worry about fewer issues when coding, and one needs to know fewer primitives and data structures.

However, in practice constant factors do matter, compilers do not always take advantage of computer architecture features, and generic primitives may be less efficient than the simple ones sufficient for the task. Ignoring these issues can lead to significant loss of computational efficiency and increased memory consumption. Power consumption also increases significantly.

In this talk we give several examples of inefficient program fragments and discuss them. These examples show that software engineers need to pay attention to low-level details when choosing data structures.
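The cost of generic primitives can be made concrete with a small illustration of my own (hypothetical, not one of the speaker's examples): in Python, the generic list boxes every element as a full object, while the specialized array type stores raw machine integers, so the simpler structure sufficient for the task is several times smaller.

```python
import sys
from array import array

n = 100_000
generic = list(range(n))        # generic container: every element is a boxed int object
compact = array('i', range(n))  # specialized container: raw 32-bit ints only

# Shallow list size plus the boxed int objects it points to, vs. the flat array.
list_bytes = sys.getsizeof(generic) + sum(sys.getsizeof(x) for x in generic)
array_bytes = sys.getsizeof(compact)
print(array_bytes < list_bytes)  # → True
```

The same workload, the same asymptotic complexity, yet a constant-factor difference in memory footprint of several times, which is exactly the kind of gap the talk is about.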

Andrew Goldberg
Bio:
Andrew Goldberg is a Senior Principal Scientist at Amazon.com, Inc. His research interests are in the design, analysis, and computational evaluation of algorithms and data structures, algorithm engineering, computational game theory, electronic commerce, parallel and distributed algorithms, and complexity theory. His algorithms are widely used in industry and academia. Goldberg received his Ph.D. in Computer Science from MIT in 1987, where he was a Hertz Foundation Fellow. Before joining Amazon in 2014, he worked at GTE Laboratories, Stanford University, NEC Research Institute, InterTrust Technologies, Inc., and Microsoft Research. Goldberg has received a number of awards for his research contributions, including the NSF Presidential Young Investigator Award and the ONR Young Investigator Award. He is a Fellow of the ACM and SIAM.

Lunch will be available to talk attendees at 12:00pm.

Interactive Data Analysis: Visualization and Beyond

Date and Time
Monday, November 6, 2017 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Prof. Adam Finkelstein

Jeffrey Heer, Associate Professor of Computer Science and Engineering at the University of Washington
Data analysis is a complex process with frequent shifts among data formats, tools and models, as well as between symbolic and visual thinking. How might the design of improved tools accelerate people's exploration and understanding of data? Covering both interactive demos and principles from academic research, this talk will examine how to craft a careful balance of interactive and automated methods, combining concepts from data visualization, machine learning, and computer systems to design novel interactive analysis tools.

Jeffrey Heer is an Associate Professor of Computer Science & Engineering at the University of Washington, where he directs the Interactive Data Lab and conducts research on data visualization, human-computer interaction and social computing. The visualization tools developed by Jeff and his collaborators (Vega, D3.js, Protovis, Prefuse) are used by researchers, companies, and thousands of data enthusiasts around the world. Jeff’s research papers have received awards at the premier venues in Human-Computer Interaction and Visualization (ACM CHI, UIST, CSCW; IEEE InfoVis, VAST, EuroVis). Other honors include MIT Technology Review’s TR35 (2009), a Sloan Fellowship (2012), and the ACM Grace Murray Hopper Award (2016). Jeff holds B.S., M.S., and Ph.D. degrees in Computer Science from UC Berkeley, whom he then betrayed to join the Stanford faculty (2009–2013). He is also a co-founder of Trifacta, a provider of interactive tools for scalable data transformation.

 

Emotion Tracking for Health and Wellbeing

Date and Time
Monday, November 20, 2017 - 12:30pm to 1:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Prof. Szymon Rusinkiewicz

Dr. Mary Czerwinski
Affective computing is emerging as an important field in the design of emotional, intelligent, conversational agents that can be used in the healthcare arena, but also in everyday life. In addition, ubiquitous recording, both in the field and in the doctor's office or patient's home, has influenced how we think about wellbeing in the future. In our research, we use sensing technologies to develop contextualized and precise delivery of interventions, both in terms of the content and the timing of the delivery, using machine learning algorithms. I will discuss how we use affective computing technologies to deliver just-in-time health interventions for improved health and for personal, behavioral reflection. For example, I will describe the Entendre project, which has implications for the design of visual feedback to encourage empathic patient-centered communication. I will also talk about ParentGuardian, a wearable sensing system that delivers just-in-time interventions to parents of children with ADHD. In addition, I'll present our findings from two applications that deliver interventions and skills from psychology for coping with conditions ranging from general stress and depression to serious mental illness, like the intent to commit suicide, using conversational agents that users trust. Finally, I'll briefly touch on some of our designs for helping users to reflect on their daily behaviors in order to improve general well-being.

Bio: Dr. Mary Czerwinski is a Principal Researcher and Research Manager of the Visualization and Interaction (VIBE) Research Group at Microsoft Research. Mary's latest research focuses primarily on emotion tracking, intervention design and delivery, information worker task management, and health and wellness for individuals and groups. Her research background is in visual attention and multitasking. She holds a Ph.D. in Cognitive Psychology from Indiana University in Bloomington. Mary was awarded the ACM SIGCHI Lifetime Service Award, was inducted into the CHI Academy, and became an ACM Distinguished Scientist in 2010. She also received the Distinguished Alumni award from Indiana University's Brain and Psychological Sciences department in 2014. Mary became a Fellow of the ACM in 2016. More information about Dr. Czerwinski can be found at her website.

Real Humans, Simulated Attacks: Usability Testing with Attack Scenarios

Date and Time
Monday, November 13, 2017 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Dr. Marshini Chetty

Professor Lorrie Faith Cranor
User studies are critical to understanding how users perceive and interact with security and privacy software and features. While it is important that users be able to configure and use security tools when they are not at risk, it is even more important that the tools continue to protect users during an attack. Conducting user studies in the presence of (simulated) risk is complicated. We would like to observe how users behave when they are actually at risk, but at the same time we cannot harm user study participants or subject them to increased risk. Often the risky situations we are interested in occur relatively infrequently in the real world, and thus can be difficult to observe in the wild.

Researchers use a variety of strategies to overcome these challenges and place participants in situations where they will believe their security or privacy is at risk, without subjecting them to increases in actual harm. In some studies, researchers recruit participants to perform real tasks not directly related to security so that they can observe how participants respond to simulated security-related prompts or cues that occur while users are focused on primary tasks. In other studies, researchers create a hypothetical scenario and try to get participants sufficiently engaged in it that they will be motivated to avoid simulated harm. Sometimes researchers have the opportunity to observe real, rather than simulated, attacks, although these opportunities are usually difficult to come by. Researchers can monitor real world user behavior over long periods of time (in public or with permission of participants) and observe how users respond to risks that occur naturally, without researcher intervention.

In this talk I will motivate the importance of security user studies and talk about a number of different user study approaches we have used at the CyLab Usable Privacy and Security Lab at Carnegie Mellon University.
 
Lorrie Faith Cranor is a Professor of Computer Science and of Engineering and Public Policy at Carnegie Mellon University, where she is director of the CyLab Usable Privacy and Security Laboratory (CUPS). She is associate department head of the Engineering and Public Policy Department and co-director of the MSIT-Privacy Engineering masters program. In 2016 she served as Chief Technologist at the US Federal Trade Commission, working in the office of Chairwoman Ramirez. She is also a co-founder of Wombat Security Technologies, Inc., a security awareness training company. She has authored over 150 research papers on online privacy, usable security, and other topics. She has played a key role in building the usable privacy and security research community, having co-edited the seminal book Security and Usability (O'Reilly 2005) and founded the Symposium On Usable Privacy and Security (SOUPS). She also chaired the Platform for Privacy Preferences Project (P3P) Specification Working Group at the W3C and authored the book Web Privacy with P3P (O'Reilly 2002). She has served on a number of boards, including the Electronic Frontier Foundation Board of Directors, and on the editorial boards of several journals. In her younger days she was honored as one of the top 100 innovators 35 or younger by Technology Review magazine. More recently she was elected to the ACM CHI Academy, named an ACM Fellow for her contributions to usable privacy and security research and education, and named an IEEE Fellow for her contributions to privacy engineering. She was previously a researcher at AT&T Labs-Research and taught in the Stern School of Business at New York University. She holds a doctorate in Engineering and Policy from Washington University in St. Louis.
In 2012-13 she spent her sabbatical as a fellow in the Frank-Ratchye STUDIO for Creative Inquiry at Carnegie Mellon University where she worked on fiber arts projects that combined her interests in privacy and security, quilting, computers, and technology. She practices yoga, plays soccer, and runs after her three children.
 

From On Body to Out of Body User Experience

Date and Time
Friday, December 1, 2017 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Prof. Olga Russakovsky

Professor James A. Landay

Today’s most common user interfaces represent an incremental change from the GUI popularized by the Apple Macintosh in 1984. Over the last 30 years the dominant hardware has changed drastically while the user interface has barely moved: from one hand on a mouse to two fingers on a panel of glass. I will illustrate how we are building on-body interfaces of the future that further engage our bodies by using muscle sensing for input and vibrotactile output, offering discrete and natural interaction on the go. I will also show how other interfaces we are designing take an even more radical approach, moving the interface off the human body altogether and onto drones that project into the space around them. Finally, I will introduce a new project where we envision buildings as hybrid physical-digital spaces that both sense and actuate to improve human wellbeing.

Bio: James Landay is a Professor of Computer Science and the Anand Rajaraman and Venky Harinarayan Professor in the School of Engineering at Stanford University. He specializes in human-computer interaction. He is the founder and co-director of the World Lab, a joint research and educational effort with Tsinghua University in Beijing. Previously, Landay was a Professor of Information Science at Cornell Tech in New York City and prior to that he was a Professor of Computer Science & Engineering at the University of Washington and a Professor in EECS at UC Berkeley. From 2003 through 2006 he was the Laboratory Director of Intel Labs Seattle, a university affiliated research lab that explored the new usage models, applications, and technology for ubiquitous computing. He was also the chief scientist and co-founder of NetRaker, which was acquired by KeyNote Systems in 2004. Landay received his BS in EECS from UC Berkeley in 1990, and MS and PhD in Computer Science from Carnegie Mellon University in 1993 and 1996, respectively. He is a member of the ACM SIGCHI Academy and he is an ACM Fellow.

CorfuDB: Transactional Data Services over a Shared Log

Date and Time
Tuesday, November 18, 2014 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Speaker
Dahlia Malkhi, until recently at Microsoft Research, Silicon Valley
Host
Michael Freedman

Dahlia Malkhi

Conventional wisdom has it that the only way to scale replicated services is by partitioning the data. What would you do if given an infrastructure that breaks the seeming tradeoff between consistency and scale?
 
The talk will describe our experience with building CorfuDB, a distributed fabric that drives consistency and transactional guarantees at high throughput. CorfuDB facilitates building distributed services in which in-memory data structures are backed by a shared log. The core is built around the CORFU log, which clients can append to and read from over a network. Internally, CORFU is distributed over a cluster of machines with no single I/O bottleneck for either appends or reads. Atop CORFU is Tango, a fabric for programming transactional data services such as the Hyder DB and an Apache ZooKeeper alternative.
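The idea of in-memory data structures backed by a shared log can be sketched in a few lines. This is a minimal single-process toy of my own, assuming made-up names (`SharedLog`, `LogBackedMap`); it is not the CorfuDB API, and the real CORFU log is replicated over a networked cluster:

```python
class SharedLog:
    """Toy stand-in for the CORFU log: a global, totally ordered sequence of entries."""
    def __init__(self):
        self.entries = []

    def append(self, entry):
        self.entries.append(entry)
        return len(self.entries) - 1        # position assigned in the log

    def read_from(self, pos):
        return self.entries[pos:]


class LogBackedMap:
    """In-memory map whose state is just a replay of the shared log (Tango-style)."""
    def __init__(self, log):
        self.log = log
        self.state = {}
        self.applied = 0                    # how far into the log we have replayed

    def put(self, key, value):
        self.log.append(("put", key, value))  # all mutations go through the log

    def get(self, key):
        self._catch_up()                    # reads first replay any new log entries
        return self.state.get(key)

    def _catch_up(self):
        for op, key, value in self.log.read_from(self.applied):
            self.state[key] = value
            self.applied += 1


log = SharedLog()
a, b = LogBackedMap(log), LogBackedMap(log)  # two "clients" sharing one log
a.put("x", 1)
print(b.get("x"))  # → 1  (b sees a's write by replaying the shared log)
```

Because every replica derives its state from the same totally ordered log, the replicas stay consistent without partitioning the data, which is the tradeoff the abstract says CorfuDB breaks.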

Dahlia Malkhi was a principal researcher at Microsoft Research, Silicon Valley from 2004 until the lab shut down in September 2014. She has worked on applied and foundational research in the reliability, consistency, and security of distributed computing since the early nineties.
 
Prior to joining Microsoft Research, Dr. Malkhi was an associate professor at the Hebrew University of Jerusalem (1999-2007), left for a brief sabbatical, but was bitten by the Silicon Valley bug and remained at Microsoft. She holds a PhD (1994), M.Sc, and B.Sc in computer science from the Hebrew University of Jerusalem, making her the only CS faculty to return to the Hebrew U. for all four academic stages. Dr. Malkhi was elected an ACM fellow in 2011, received the IBM Faculty award in 2003 and 2004 and the German-Israeli Foundation (G.I.F.) Young Scientist award in 2002. She serves on the editorial boards of the IEEE Transactions on Dependable and Secure Computing and of the Distributed Computing Journal. She chaired LADIS 2012, Locality 2007, PODC 2006, Locality 2005, and DISC 2002.

What Google Glass means for the future of photography

Date and Time
Monday, October 20, 2014 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Jianxiong Xiao

Marc Levoy

Although head-mounted cameras (and displays) are not new, Google Glass has the potential to make these devices commonplace. This has implications for the practice, art, and uses of photography. So what's different about doing photography with Glass? First, Glass doesn't work like a conventional camera; it's hands-free, point-of-view, always available, and instantly triggerable. Second, Glass facilitates different uses than a conventional camera: recording documents, making visual todo lists, logging your life, and swapping eyes with other Glass users. Third, Glass will be an open platform, unlike most cameras.

Making Glass an open platform is not easy, because it is a heterogeneous computing platform, with multiple processors having different performance, efficiency, and programmability. The challenge is to invent software abstractions that allow control over the camera as well as access to these specialized processors. Finally, devices like Glass that are head-mounted and perform computational photography in real time have the potential to give wearers "superhero vision", like seeing in the dark, or magnifying subtle motion or changes. If such devices can also perform computer vision in real time and are connected to the cloud, then they can do face recognition, live language translation, and information recall. The hard part is not imagining these capabilities, but deciding which ones are feasible, useful, and socially acceptable.

Marc Levoy is the VMware Founders Professor of Computer Science, Emeritus. Education: B. Architecture and M.S. from Cornell (1976, 1978), PhD in Computer Science from University of North Carolina (1989). In previous lives he worked on computer-assisted cartoon animation (1970s), volume rendering (1980s), 3D scanning (1990s), and computational photography (2000s), including light field photography and microscopy. At Stanford he taught computer graphics, digital photography, and the science of art. Outside of academia, Levoy co-designed the Google book scanner, launched Google's Street View project, and currently leads a team in Google X working on Project Glass and the Nexus 5's HDR+ mode. Awards: Charles Goodwin Sands Medal for best undergraduate thesis (1976), National Science Foundation Presidential Young Investigator (1991), ACM SIGGRAPH Computer Graphics Achievement Award (1996), ACM Fellow (2007).

The Unreasonable Effectiveness of Deep Learning

Date and Time
Wednesday, October 22, 2014 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Distinguished Colloquium Series Speaker
Host
Jianxiong Xiao

Yann LeCun

The emergence of large datasets, parallel computers, and new machine learning methods has enabled the deployment of highly accurate computer perception systems and is opening the door to a wide deployment of AI systems.

A key component in AI systems is a module, sometimes called a feature extractor, that turns raw inputs into suitable internal representations. But designing and building such a module requires a considerable amount of engineering effort and domain expertise.

Deep Learning methods have provided a way to automatically learn good representations of data from labeled or unlabeled samples. Deep architectures are composed of successive stages in which data representations are increasingly global, abstract, and invariant to irrelevant transformations of the input. Deep learning enables end-to-end training of these architectures, from raw inputs to ultimate outputs.

The convolutional network model (ConvNet) is a particular type of deep architecture somewhat inspired by biology, which consists of multiple stages of filter banks, interspersed with non-linear operators, and spatial pooling. ConvNets have become the record holders for a wide variety of benchmarks, including object detection, localization, and recognition in images, semantic segmentation and labeling, face recognition, acoustic modeling for speech recognition, drug design, handwriting recognition, biological image segmentation, and more.
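The three stage types named here (filter bank, non-linear operator, spatial pooling) can be shown in miniature. Below is a hypothetical pure-Python 1-D sketch of my own, far smaller than any real ConvNet and not the speaker's code, just to make the stage structure concrete:

```python
def conv1d(x, w):
    """Filter bank stage: slide filter w across signal x (valid positions only)."""
    n = len(w)
    return [sum(x[i + j] * w[j] for j in range(n)) for i in range(len(x) - n + 1)]

def relu(v):
    """Non-linear operator applied elementwise."""
    return [max(0.0, a) for a in v]

def maxpool(v, k):
    """Spatial pooling stage: keep the max of each non-overlapping window of k values."""
    return [max(v[i:i + k]) for i in range(0, len(v) - k + 1, k)]

x = [1.0, -2.0, 3.0, 0.5, -1.0, 2.0]   # toy input signal
w = [1.0, -1.0]                        # one hand-picked edge-detecting filter
features = maxpool(relu(conv1d(x, w)), 2)
print(features)  # → [3.0, 2.5]
```

A real ConvNet stacks many such stages in 2-D with many filters per bank, and, as the abstract notes, the filters are learned end-to-end rather than hand-picked; the output here is an increasingly abstract, translation-tolerant summary of the input, which is the point of the architecture.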

The most recent systems deployed by Facebook, Google, NEC, IBM, Microsoft, Baidu, Yahoo and others for image understanding, speech recognition, and natural language processing use deep learning. Many of these systems use very large and very deep ConvNets with billions of connections, trained in supervised mode. But many new applications require the use of unsupervised feature learning. A number of such methods based on sparse auto-encoder will be presented.

Several applications will be shown through videos and live demos, including a category-level object recognition system that can be trained on the fly, a scene parsing system that can label every pixel in an image with the category of the object it belongs to, an object localization and detection system, and several natural language processing systems. Specialized hardware architectures that run these systems in real time will also be described.

Yann LeCun is Director of AI Research at Facebook, and Silver Professor of Data Science, Computer Science, Neural Science, and Electrical Engineering at New York University, affiliated with the NYU Center for Data Science, the Courant Institute of Mathematical Science, the Center for Neural Science, and the Electrical and Computer Engineering Department.

He received the Electrical Engineer Diploma from Ecole Supérieure d'Ingénieurs en Electrotechnique et Electronique (ESIEE), Paris in 1983, and a PhD in Computer Science from Université Pierre et Marie Curie (Paris) in 1987. After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories in Holmdel, NJ in 1988. He became head of the Image Processing Research Department at AT&T Labs-Research in 1996, and joined NYU as a professor in 2003, after a brief period as a Fellow of the NEC Research Institute in Princeton. From 2012 to 2014 he directed NYU's initiative in data science and became the founding director of the NYU Center for Data Science. He was named Director of AI Research at Facebook in late 2013 and retains a part-time position on the NYU faculty.

His current interests include AI, machine learning, computer perception, mobile robotics, and computational neuroscience. He has published over 180 technical papers and book chapters on these topics as well as on neural networks, handwriting recognition, image processing and compression, and on dedicated circuits and architectures for computer perception. The character recognition technology he developed at Bell Labs is used by several banks around the world to read checks and was reading between 10 and 20% of all the checks in the US in the early 2000s. His image compression technology, called DjVu, is used by hundreds of web sites and publishers and millions of users to access scanned documents on the Web. Since the mid-1980s he has been working on deep learning methods, particularly the convolutional network model, which is the basis of many products and services deployed by companies such as Facebook, Google, Microsoft, Baidu, IBM, NEC, AT&T and others for image and video understanding, document recognition, human-computer interaction, and speech recognition.

LeCun has been on the editorial board of IJCV, IEEE PAMI, and IEEE Trans. Neural Networks, was program chair of CVPR'06, and is chair of ICLR 2013 and 2014. He is on the science advisory board of Institute for Pure and Applied Mathematics, and Neural Computation and Adaptive Perception Program of the Canadian Institute for Advanced Research. He has advised many large and small companies about machine learning technology, including several startups he co-founded. He is the lead faculty at NYU for the Moore-Sloan Data Science Environment, a $36M initiative in collaboration with UC Berkeley and University of Washington to develop data-driven methods in the sciences. He is the recipient of the 2014 IEEE Neural Network Pioneer Award.
