Multimodal (Audio, Facial and Gesture) based Emotion Recognition Challenge
People express emotions through different modalities. Integrating verbal and non-verbal communication channels creates a system in which the message is easier to understand. Expanding the focus to several expression forms can facilitate research on emotion recognition as well as human-machine interaction. In this competition, the authors present a Polish emotional database composed of three modalities: facial expressions, body movement and gestures, and speech. The corpus contains recordings registered in studio conditions, acted out by 16 professional actors (8 male and 8 female). The data is labeled with six basic emotion categories, following Ekman's model.
The participants will have to analyze all 3 modalities and perform emotion recognition based on them. The participants must submit their code and all dependencies via CodaLab, and the organizers will run the code. The evaluation will be based on the average correct emotion recognition rate for each modality individually as well as for all 3 modalities together. In case of equal performance, processing time will be used to determine the ranking. The training data will be provided first, followed by the validation dataset. The test data will be released last, without labels, and will be used to evaluate the participants.
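The evaluation described above can be illustrated with a minimal sketch: a per-modality accuracy and the mean across modalities. This is purely hypothetical code, not the organisers' actual scoring script; the function names, modality keys, and sample data are illustrative assumptions.

```python
# Hypothetical sketch of the challenge metric: per-modality accuracy,
# then the mean over modalities. Not the official evaluation code.

def accuracy(predictions, labels):
    """Fraction of correctly recognised emotions."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

def challenge_score(per_modality_predictions, labels):
    """Return each modality's accuracy and their mean."""
    scores = {m: accuracy(preds, labels)
              for m, preds in per_modality_predictions.items()}
    average = sum(scores.values()) / len(scores)
    return scores, average

# Illustrative example with four test clips and three modalities
labels = ["anger", "joy", "fear", "sadness"]
preds = {
    "audio":   ["anger", "joy", "fear", "anger"],    # 3/4 correct
    "face":    ["anger", "joy", "fear", "sadness"],  # 4/4 correct
    "gesture": ["joy",   "joy", "fear", "sadness"],  # 3/4 correct
}
scores, average = challenge_score(preds, labels)
```

Under this sketch, ties on `average` would then be broken by measured processing time, as stated above.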
List of organisers:
- Dorota Kaminska and Tomasz Sapiński - Lodz University of Technology, Poland
- Kamal Nasrollahi - Aalborg University, Denmark
- Hasan Demirel - Eastern Mediterranean University, Turkey
- Cagri Ozcinar - Trinity College Dublin, Ireland
- Gholamreza Anbarjafari - iCV Lab, University of Tartu, Estonia