Character Guided Listening

Research Intern
2015. 8 - 2017. 5
Exploring a novel entertainment opportunity, I examined whether an animated character, driven by real-time facial capture of a music expert, stimulates emotional arousal in novice listeners during music listening. I conducted an experiment with 76 undergraduate students to determine whether the character could evoke the presented emotions. The resulting paper was accepted to HCI Korea 2017.
Methods & Toolkit
Emotion Self-Reporting, Interview, Unity3D & Faceware


Would a virtual character of a music expert, changing facial expressions in sync with a piece of music, help listeners appreciate the music better?

Experiment Design


I conducted an experiment with 76 undergraduate students divided into two groups: the control group listened to a piece of music without any visual display, while the experimental group listened to the same piece accompanied by a character with motion-captured facial expressions.

Music Selection

Our team focused on the genre of classical music for the experiment. For modern listeners, classical music is relatively difficult to access and appreciate compared to other genres. We used the classical orchestral piece "Jupiter" (7'46") by Gustav Holst, from his suite The Planets. The piece has melodic and temporal dynamics that can potentially incite a wide range of emotions, making it a suitable stimulus for the experiment.

Character Design

We created a video displaying the animated face of a 3D character that expressed feelings as the music progressed. We used Faceshift, a real-time 3D facial motion-capture software, to record a recruited music expert. She was a student double-majoring in violin and acting, making her a suitable model. We provided her the music in advance so that she could internalize the piece's emotional structure before the performance. After a short training session, we recorded her facial articulations as she freely expressed her interpretation while listening.

An actress double-majoring in music and acting listened to the experiment music while her facial expressions were captured with Faceshift.

Experiment Process

The experiment consisted of 12 sessions with 5-7 participants each, and each session lasted no more than 20 minutes. After listening to the first movement of the music, all participants completed a questionnaire with quantitative questions about the emotions they actually experienced while listening, as well as qualitative questions about the overall listening experience, the listening environment, and the effect of the character. We used the Geneva Emotional Music Scale (GEMS-9), the first validated scale for measuring musically evoked emotions, as our quantitative instrument.
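The between-group comparison implied by this design can be sketched in a few lines of Python. This is a hypothetical illustration, not the study's actual analysis: the function name, the choice of Welch's t-test, and the sample ratings are all assumptions, since the original write-up does not specify the statistical procedure or share data.

```python
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (does not assume equal variances or equal group sizes)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)          # sample means
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # unbiased variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / sqrt(va / len(a) + vb / len(b))

# Illustrative (made-up) GEMS-9 ratings for one emotion on a 1-5 scale
control = [2, 3, 2, 3, 2, 3, 2]          # music only
with_character = [3, 4, 3, 4, 4, 3, 4]   # music + animated character
t = welch_t(with_character, control)
print(f"t = {t:.2f}")  # a large |t| suggests the groups differ on this emotion
```

In practice one would compute the degrees of freedom and a p-value as well (e.g. via `scipy.stats.ttest_ind` with `equal_var=False`), and repeat the comparison for each of the nine GEMS-9 emotions.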

Future Study

The first experiment had limitations. The crude appearance of the character might have negatively influenced the participants' feelings. In addition, the GEMS-9 scale might not have been appropriate, since no previous Korean study had successfully used a Korean version of the scale. Thus, in the follow-up experiment, we decided to rule out preference or rejection reactions to the character itself by using a well-known popular 3D character model and recruiting only people who like that character. We will also use a basic discrete emotion model instead of GEMS so that participants can record their feelings less ambiguously.


We found a statistically significant relationship between one of the measured emotions and the character's performance. Moreover, based on the answers to the qualitative questions, we extracted a character-design guideline for the future experiment.