Exploring Emotion and Expression through Music Technology

Date and Time
Tuesday, April 27, 2010 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Speaker
Youngmoo Kim, Drexel University
Host
Kenneth Steiglitz
Over the past few decades, advances in computing and signal processing have had a profound impact on music performance, recording, and reproduction. The universal appeal of music, however, lies in its ability to give expression to our feelings, and recognizing or utilizing these concepts computationally remains beyond the capability of current systems. Expression and emotion are, of course, highly subjective terms, ill-suited to quantitative definition and exploration. There may be considerable disagreement among listeners regarding the emotional content of a particular piece, and the expressive intentions of a highly skilled performer can be equally difficult to quantify. But innovations in sensing, human-computer interaction, and machine learning have the potential to reveal insights into these opaque domains.

This presentation will highlight research by the Music & Entertainment Technology Laboratory (MET-lab) at Drexel University exploring music, emotion, and creative expression under the common vision of making music more interactive and accessible for both musicians and non-musicians. These projects encompass the recognition of emotion, such as a system for dynamic musical mood prediction and a collaborative web game for the collection of emotional annotations, as well as interfaces for expressive performance, including a novel electromagnetic approach to shaping the sound of the acoustic piano and a user-friendly controller for remixing music in terms of emotion. These and other MET-lab efforts are closely coupled with educational initiatives, many of which have been deployed in K-12 outreach programs in the Philadelphia region, to promote learning in Science, Technology, Engineering, and Mathematics (STEM).

Youngmoo Kim is an Assistant Professor of Electrical and Computer Engineering at Drexel University. His research group, the Music & Entertainment Technology Laboratory (MET-lab), focuses on the machine understanding of audio, particularly for music information retrieval. Other areas of active research at MET-lab include analysis-synthesis of sound, human-machine interfaces and robotics for expressive interaction, and K-12 outreach for engineering, science, and mathematics education. Youngmoo received his Ph.D. from the MIT Media Lab in 2003 and also holds Master's degrees in Electrical Engineering and Music (Vocal Performance Practice) from Stanford University. He served as a member of the MPEG standards committee, contributing to the MPEG-4 and MPEG-7 audio standards, and he co-chaired the 2008 International Conference on Music Information Retrieval (hosted at Drexel). His research is supported by the National Science Foundation and the NAMM Foundation, and he received an NSF CAREER award in 2007.
