Keynotes

Keynotes at a Glance

Tuesday, 5 May, 2015, 8:30 – 9:30: Matthew Turk – TBD
Wednesday, 6 May, 2015, 8:30 – 9:30: Ursula Hess – The Social Signal Value of Emotion Expressions: The Impact of Context and Culture
Thursday, 7 May, 2015, 8:30 – 9:30: Louis-Philippe Morency – Multimodal Machine Learning: Modeling Human Communication Dynamics

TBD

Matthew Turk

Tuesday, 5 May 2015

Bio: Matthew Turk received a B.S. from Virginia Tech in 1982 and an M.S. from Carnegie Mellon University in 1984, where his master’s work was in the area of robot fine motion planning. He worked for Martin Marietta Denver Aerospace from 1984 to 1987, primarily on vision for autonomous robot navigation (part of DARPA’s ALV program, the precursor to the more recent DARPA Grand Challenge events). In 1987 he went to the Massachusetts Institute of Technology, where he received a Ph.D. from the Media Lab in 1991 for his work on automatic face recognition. A paper on this work received an IEEE Computer Society Outstanding Paper award at the IEEE Conference on Computer Vision and Pattern Recognition in 1991. After a brief postdoc at MIT, he moved to Grenoble, France, in 1992 as a visiting researcher at LIFIA/ENSIMAG, and then took a position at Teleos Research (in Palo Alto, CA) in 1993. In 1994, Matthew joined Microsoft Research as a founding member of the Vision Technology Group. In 2000 he joined the faculty of the University of California, Santa Barbara (UCSB), where he is now a full Professor in the Computer Science Department and former Chair (2005-2010) of the Media Arts and Technology Graduate Program. He co-directs the Four Eyes Lab, where the research focus is on the “four I’s” of Imaging, Interaction, and Innovative Interfaces. He is a founding member and former chair of the advisory board for the International Conference on Multimodal Interfaces, and he serves on the editorial boards of Image and Vision Computing and the ACM Transactions on Interactive Intelligent Systems. He was a general chair of ACM Multimedia 2006, the IEEE Conference on Automatic Face and Gesture Recognition 2011, and the 2014 IEEE Conference on Computer Vision and Pattern Recognition. He is an IEEE Fellow, an IAPR Fellow, and the recipient of the 2011-2012 Fulbright-Nokia Distinguished Chair in Information and Communications Technologies.

The Social Signal Value of Emotion Expressions: The Impact of Context and Culture

Professor Ursula Hess

Wednesday, 6 May 2015

Abstract: Emotions are not expressed in a social vacuum; rather, they are expressed in a social context. Thus, the same smile may signal something very different when shown in response to another person’s success than when shown in response to their failure. In human interactions, context is often implicitly defined by the normative expectations that people have with regard to an event and the likely reactions of the protagonists. These expectations also vary with culture: the same expression may be expected and appropriate in one cultural context but less appropriate in another. In this presentation I discuss the impact of context and culture on the social signals that are transmitted through emotion expressions.

Bio: Prof. Dr. Ursula Hess is Professor of Social and Organizational Psychology at the Humboldt-Universität zu Berlin. She received her Ph.D. from Dartmouth College and held a faculty position at the University of Quebec at Montreal for 17 years. She is an associate editor of the Journal of Nonverbal Behavior, Cognition and Emotion, and the IEEE Transactions on Affective Computing, and a Fellow of the Society for Personality and Social Psychology and the Association for Psychological Science. Her main research domain is the communication of emotions. She has published over 100 peer-reviewed articles and book chapters.

Multimodal Machine Learning: Modeling Human Communication Dynamics

Louis-Philippe Morency

Thursday, 7 May 2015

Abstract: Human face-to-face communication is a little like a dance, in that participants continuously adjust their behaviors based on verbal and nonverbal cues from the social context. Today’s computers and interactive devices still lack many of the human-like abilities needed to hold fluid and natural interactions. Multimodal machine learning addresses this challenge of creating algorithms and computational models able to analyze, recognize and predict subtle human communicative behaviors in social context. I formalize this new research endeavor with a Human Communication Dynamics framework that addresses four key computational challenges: behavioral dynamics, multimodal dynamics, interpersonal dynamics and societal dynamics. Central to this research effort is the introduction of new probabilistic models able to learn the temporal and fine-grained latent dependencies across behaviors, modalities and interlocutors. In this talk, I will present some of our recent achievements in modeling multiple aspects of human communication dynamics, motivated by applications in healthcare (depression, PTSD, suicide, autism), education (learning analytics), business (negotiation, interpersonal skills) and social multimedia (opinion mining, social influence).

Bio: Louis-Philippe Morency is an Assistant Professor in the Language Technologies Institute at Carnegie Mellon University, where he leads the Multimodal Communication and Machine Learning Laboratory (MultiComp Lab). He received his Ph.D. and Master’s degrees from the MIT Computer Science and Artificial Intelligence Laboratory. In 2008, Dr. Morency was selected as one of “AI’s 10 to Watch” by IEEE Intelligent Systems. He has received seven best paper awards at ACM- and IEEE-sponsored conferences for his work on context-based gesture recognition, multimodal probabilistic fusion and computational models of human communication dynamics. For the past three years, Dr. Morency has been leading a DARPA-funded multi-institution effort called SimSensei, which was recently named one of the year’s ten most promising digital initiatives by the NetExplo Forum, in partnership with UNESCO.