Demos

The following demos have been accepted for presentation at FG2015.

3D Interaction Design: Increasing the Stimulus-Response Correspondence by using Stereoscopic Vision

Toni Fetzer, Christian Petry

Short description: This demo application presents a hand-based interaction approach for grabbing and manipulating virtual objects within immersive virtual environments. Since most people are used to working in monoscopic virtual environments, we created this application to analyze whether an interaction approach using stereoscopic vision, compared to conventional monoscopic vision, results in higher user acceptance and stimulus-response correspondence. Video available on YouTube and Vimeo.

Real-Time Dense 3D Face Alignment from 2D Video with Automatic Facial Action Unit Coding

Laszlo Jeni, Jeffrey Girard, Jeff Cohn, Takeo Kanade

Short description: Face alignment is the problem of registering a parameterized shape model to an image such that its landmarks correspond to consistent locations on the face. Previous methods either locate a small number of fiducial points in real time (e.g., Constrained Local Models or Supervised Descent Methods) or fit a high-resolution 3D model at much higher computational cost (3D Morphable Models). We present a system for dense 3D face alignment from 2D video that achieves real-time performance with manageable storage size. The system produces precise, dense shape information for spontaneous facial action unit detection. Video available.
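To make the contrast with the cited methods concrete, here is a minimal numpy sketch of the cascade-regression idea behind real-time aligners such as Supervised Descent Methods. It illustrates the technique family only, not the authors' system; the feature extractor phi, the dimensions, and the "trained" regressors are toy stand-ins:

```python
# Toy sketch of the supervised-descent / cascade-regression idea behind
# real-time face alignment (NOT the authors' actual system).
# Landmarks are refined by a fixed number of learned linear updates:
#   x_{k+1} = x_k + R_k * phi(I, x_k) + b_k
import numpy as np

rng = np.random.default_rng(0)
N_LANDMARKS = 68           # 2D fiducial points; dense 3D models use thousands
FEAT_DIM = 128             # per-image feature length (assumed)

def phi(image, landmarks):
    """Placeholder feature extractor: real systems sample local
    descriptors (e.g., SIFT or HOG) around each landmark."""
    return rng.standard_normal(FEAT_DIM)

def align(image, x_init, cascade):
    """Run the learned cascade of linear regressors."""
    x = x_init.copy()
    for R, b in cascade:                 # one (R, b) pair per cascade stage
        x = x + R @ phi(image, x) + b    # descent step toward the true shape
    return x

# Fake "trained" cascade of 4 stages, just to make the sketch executable.
cascade = [(0.01 * rng.standard_normal((2 * N_LANDMARKS, FEAT_DIM)),
            np.zeros(2 * N_LANDMARKS)) for _ in range(4)]
x0 = np.zeros(2 * N_LANDMARKS)           # mean shape as initialization
print(align(image=None, x_init=x0, cascade=cascade).shape)  # (136,)
```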

Controllable Face Privacy

Li Zhang, Terence Sim

Short description: We demonstrate how face privacy can be controllably protected through synthesis. The synthesized face in our system protects identity privacy while still allowing other computer vision analyses, such as gender detection, to proceed unimpeded. This makes it useful for reaping the benefits of surveillance cameras while preventing privacy abuse. Extensive experiments show that our synthesis method is effective.
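One common way to realize this kind of controllable synthesis is to decompose a face into identity and attribute components and replace only the former. The linear-subspace sketch below is a hedged illustration of that general idea, not the authors' actual method; the bases B_id and B_attr and all dimensions are invented for the example:

```python
# Minimal sketch of attribute-preserving de-identification via linear
# subspace decomposition (an illustration of the general idea, not the
# authors' method). A vectorized face is split into an "attribute"
# component (kept, so analyses like gender detection still work) and an
# "identity" component (replaced by a donor's).
import numpy as np

rng = np.random.default_rng(1)
D, K_ID, K_ATTR = 1024, 20, 10            # toy dimensions (assumed)
B_id = np.linalg.qr(rng.standard_normal((D, K_ID)))[0]      # identity basis
B_attr = np.linalg.qr(rng.standard_normal((D, K_ATTR)))[0]  # attribute basis

def de_identify(face, donor):
    """Keep the input face's attribute coefficients, swap in the donor's
    identity coefficients, and reconstruct."""
    attr_coeff = B_attr.T @ face          # project onto attribute subspace
    id_coeff = B_id.T @ donor             # donor supplies the identity
    return B_attr @ attr_coeff + B_id @ id_coeff

face, donor = rng.standard_normal(D), rng.standard_normal(D)
print(de_identify(face, donor).shape)     # (1024,)
```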

Semi-transparent mirror with hidden camera to assess human emotions

Martin Šavc, Damjan Zazula, Jurij Munda, Božidar Potočnik

Short description: A semi-transparent mirror with a hidden camera was developed in the EU-funded Biomedical Engineering Competence Centre. Embedded computer algorithms detect and track faces, recognize the persons in front of the mirror, and consistently assess their skin color and their emotions in real time. Video available.

A Live Video Analytic System for Affect Analysis in Public Space

Jixu Chen, Ming-Ching Chang, Peter Tu

Short description: We propose a video analytic system capable of analyzing group density and social signals such as affect in public spaces. In contrast to most facial analysis systems, which operate on fixed cameras, this demonstration couples Pan-Tilt-Zoom (PTZ) camera control with facial video analysis in order to effectively locate individuals, distill expression signals, and provide visual feedback in real time. We assume individuals sit at a fixed number of seats or benches. Our PTZ control module operates cooperatively with a face analysis module to actively search for facial shots in the public space and perform expression and pose analysis. Group density is estimated as the occupancy rate with respect to the possible seating locations, and group-level affect is inferred from aggregate facial expression recognition.
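The two group-level quantities named above reduce to simple aggregates. The toy computation below reflects our reading of the abstract, not the authors' code; the seat list and per-face valence scores are invented inputs:

```python
# Toy illustration of the two group-level estimates described above:
# density as the occupancy rate over the known seat locations, and
# group affect as the average of per-face expression scores.
seats_occupied = [True, False, True, True, False, False]   # from PTZ sweeps
# per-face valence scores in [-1, 1] from the face-analysis module (assumed)
face_valence = [0.4, -0.1, 0.7]

group_density = sum(seats_occupied) / len(seats_occupied)  # 0.50
group_affect = sum(face_valence) / len(face_valence)       # ~0.33

print(f"density={group_density:.2f}, affect={group_affect:.2f}")
```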

3D Face Recognition Utilizing a Low-Cost Depth Sensor

Stepan Mracek, Radim Dvorak, Jan Vana, Tomas Novotny, Martin Drahansky

Short description: This demo shows a working prototype of a 3D face recognition biometric device utilizing a low-cost depth sensor, namely the SoftKinetic DS325. It is based on an Intel Celeron board for embedded PCs, the sensor, and a touch screen. Video available on YouTube.

Faces and Thoughts: An Empathic Diary

José Mennesson, Benjamin Allaert, Ioan Marius Bilasco, Nico van der Aa, Alexandre Denis, Samuel Cruz-Lara

Short description: Many diary apps have been developed for Android mobile devices. Although most concentrate on securing privacy and adding emoticons, only a few include automatic emotion measurements. This demo shows a new diary app that includes real-time multi-modal emotion measurements to capture the affective state of the user from the text provided and the video images captured. The emotion measurements from the Emotion from Face module, which analyzes images from the front camera, and the Emotion from Text module, which analyzes the text written by the user, are merged within the Emotion Fusion module to estimate the user's affective state more robustly. The app gives the user empathic feedback for each session. Video available.
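The abstract does not specify the fusion rule, but a minimal sketch of what an Emotion Fusion module might compute is a confidence-weighted average of the two modules' valence/arousal estimates; the tuple format and the weights here are assumptions:

```python
# Hedged sketch of a possible fusion rule for the face and text emotion
# estimates (the abstract does not specify the actual rule): a
# confidence-weighted average over valence and arousal.
def fuse(face_est, text_est):
    """Each estimate is a (valence, arousal, confidence) tuple."""
    (vf, af, wf), (vt, at, wt) = face_est, text_est
    w = wf + wt
    return ((wf * vf + wt * vt) / w, (wf * af + wt * at) / w)

# e.g., a smiling selfie but gloomy diary text:
print(fuse(face_est=(0.6, 0.3, 0.8), text_est=(-0.4, 0.2, 0.5)))
```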

Let me Tell You about Your Personality! Real-time Personality Prediction from Nonverbal Behavioural Cues

Oya Celiktutan, Evangelos Sariyanidi, Hatice Gunes

Short description: Although automatic personality analysis has been studied extensively in recent years, it has not yet been adopted for real-time applications and real-life practices. To the best of our knowledge, this demonstration is a first attempt at predicting the widely used Big Five personality dimensions and a number of social dimensions from nonverbal behavioural cues in real time. The proposed system analyses the nonverbal behaviour of the person who interacts with a small humanoid robot through a live streaming camera, and delivers the predicted personality and social dimensions on the fly. Video available.
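Schematically, such a real-time predictor pools nonverbal features over a sliding window and scores each trait with a pretrained regressor. The sketch below assumes that structure from the abstract; the feature dimension, window length, and linear regressors are all toy stand-ins:

```python
# Schematic of an on-the-fly Big Five predictor of the kind described
# above (structure assumed from the abstract, not the authors' code).
import numpy as np

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]
rng = np.random.default_rng(2)
FEAT_DIM = 64                                  # per-frame feature length (toy)
# stand-ins for trained linear regressors, one per trait
weights = {t: rng.standard_normal(FEAT_DIM) for t in TRAITS}

def predict(frame_features):
    """Mean-pool per-frame nonverbal features over the window, then
    score each trait with its regressor."""
    pooled = np.mean(frame_features, axis=0)
    return {t: float(w @ pooled) for t, w in weights.items()}

window = rng.standard_normal((30, FEAT_DIM))   # ~1 s of frames at 30 fps
print(predict(window))
```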

Who do you want to be? Real-time Face Swap

Tim den Uyl, Emrah Tasli, Paul Ivan, Mariska Snijdewind

Short description: This demonstration paper presents a face swap application in which two people's faces are automatically exchanged in real time without any calibration or training. This is performed using the Active Appearance Models technique. A realistic visualization is achieved using an adaptive texture sampling technique. The face swap is performed irrespective of the sex, age, or ethnicity of the subjects in front of the camera. This application is intended for gaming, shopping, educational, or entertainment purposes and will be presented in a real-time setup during the demo session.
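For readers unfamiliar with landmark-based swapping, the sketch below shows the core warping step of a generic pipeline: triangulate one face's tracked landmarks and piecewise-affine-warp its texture onto the corresponding triangles of the other face. It illustrates the general approach only; the authors' AAM tracking and adaptive texture sampling are not reproduced, and the images and landmarks are synthetic:

```python
# Generic landmark-based face-swap warping step (illustrative sketch,
# not the authors' system).
import numpy as np
import cv2
from scipy.spatial import Delaunay

def swap_face(src_img, src_pts, dst_img, dst_pts):
    """Warp src_img's face texture onto dst_img, triangle by triangle."""
    out = dst_img.copy()
    for tri in Delaunay(src_pts).simplices:      # shared triangle topology
        s = src_pts[tri].astype(np.float32)
        d = dst_pts[tri].astype(np.float32)
        xs, ys, ws, hs = cv2.boundingRect(s)     # source triangle box
        xd, yd, wd, hd = cv2.boundingRect(d)     # destination triangle box
        # affine map between the triangles, in local box coordinates
        M = cv2.getAffineTransform(s - np.float32([xs, ys]),
                                   d - np.float32([xd, yd]))
        patch = cv2.warpAffine(src_img[ys:ys + hs, xs:xs + ws], M, (wd, hd))
        # copy only the pixels inside the destination triangle
        mask = np.zeros((hd, wd), np.uint8)
        cv2.fillConvexPoly(mask, np.round(d - [xd, yd]).astype(np.int32), 1)
        roi = out[yd:yd + hd, xd:xd + wd]
        roi[mask == 1] = patch[mask == 1]
    return out

# Toy usage with synthetic images and jittered landmark sets.
rng = np.random.default_rng(3)
img_a = rng.integers(0, 255, (240, 240, 3), dtype=np.uint8)
img_b = rng.integers(0, 255, (240, 240, 3), dtype=np.uint8)
pts_a = rng.uniform(40.0, 200.0, (20, 2))        # stand-in for tracked points
pts_b = pts_a + rng.uniform(-5.0, 5.0, (20, 2))
print(swap_face(img_a, pts_a, img_b, pts_b).shape)  # (240, 240, 3)
```

In a real system the landmark sets would come from the face tracker, and a blending step (e.g., seamless cloning) would hide the seams.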

Real-Time Facial Character Animation

Emrah Tasli, Tim den Uyl, Hugo Boujut, Titus Zaharia

Short description: This demonstration paper presents a real-time facial character animation application in which the facial expressions of a person are simultaneously synthesized on a virtual avatar. The proposed method does not require any training or calibration for the person interacting with the system. An Active Appearance Model-based technique is used to track more than 500 points on the face to create the animated expression of the virtual avatar. The sex, age, or ethnicity of the subject in front of the camera can also be analyzed automatically, and the visualization of the avatar adapted accordingly. The application requires only a standard webcam, is intended for gaming, entertainment, or video conferencing purposes, and will be presented in a real-time setup during the demo session.
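One standard way to turn tracked points into an avatar expression is to fit blendshape weights to the observed landmark offsets by least squares. The abstract does not detail the authors' retargeting, so the basis, dimensions, and clipping below are assumptions for illustration:

```python
# Sketch of a common landmark-to-avatar retargeting step (illustrative;
# not the authors' method): solve for blendshape weights w so that the
# blendshape basis B applied to the neutral face best reproduces the
# observed landmark offsets.
import numpy as np

rng = np.random.default_rng(4)
N_PTS, N_SHAPES = 500, 12                  # >500 tracked points, toy basis
B = rng.standard_normal((2 * N_PTS, N_SHAPES))   # per-shape landmark offsets
neutral = rng.standard_normal(2 * N_PTS)         # neutral-face landmarks

def retarget(tracked):
    """Least-squares fit of blendshape weights, clipped to [0, 1]."""
    w, *_ = np.linalg.lstsq(B, tracked - neutral, rcond=None)
    return np.clip(w, 0.0, 1.0)            # weights then drive the avatar rig

frame = neutral + B @ np.full(N_SHAPES, 0.3)     # synthetic "smile" frame
print(retarget(frame).round(2))                  # recovers ~0.3 per shape
```

Clipping keeps the weights in the valid blendshape range; a real-time system would typically add temporal smoothing across frames.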

IntraFace

Fernando De La Torre, Wen-Sheng Chu, Xuehan Xiong, Francisco Vincente, Jeff Cohn

Short description: The face is one of the most powerful channels of nonverbal communication. Facial expression provides cues about emotion, intention, alertness, pain, and personality; regulates interpersonal behavior; and communicates psychiatric and biomedical status, among other functions. IntraFace is a technology for facial image analysis developed over the last 10 years by Carnegie Mellon University and the University of Pittsburgh. We will show demos of real-time facial feature tracking, facial expression analysis, gaze estimation, head pose estimation, and facial attribute recognition (e.g., gender, ethnicity, age). IntraFace will be free of charge for the research community, and we will demonstrate its use.