The 2nd Workshop & Challenge on Micro-gesture Analysis for Hidden Emotion Understanding (MiGA)

To be held at IJCAI 2024, 3rd-9th August 2024, Jeju, South Korea

Welcome to MiGA Workshop & Challenge 2024

We are jointly holding the second workshop and challenge on Micro-gesture Analysis for Hidden Emotion Understanding (MiGA) at IJCAI 2024, 3rd-9th August 2024.

We warmly welcome your contribution and participation!

News

18 July : The workshop schedule is now available! (See the Workshop tab.)

3 July : Our MiGA Workshop will be held on August 4th in Jeju, Korea!

3 June : The workshop paper notification date has been extended to 7th June 2024.

12 May : The challenge is officially over. Please submit your code by 15th May; check the details in the Kaggle discussion section.

02 May : The testing set for Phase 2 of both tracks has been released. The data comes from the iMiGUE and SMG datasets and contains both skeleton data and RGB videos!

09 April : The training and validation sets for both tracks have been released. The data comes from the iMiGUE and SMG datasets and contains both skeleton data and RGB videos!

29 March : The Kaggle websites of the MiGA challenge are available for both tracks; the training data will be released in a few days.

23 March : The website of the MiGA workshop & challenge is available.

15 March : Great news! The 2nd MiGA workshop has been accepted at IJCAI 2024!

Overview

We are holding the 2nd MiGA Workshop & Challenge to explore the use of body gestures for hidden emotional state analysis, at IJCAI 2024, 3rd-9th August 2024, Jeju, South Korea.

As an important form of non-verbal communication, human body gestures can convey emotional information during social interaction. Previous work has focused mainly on facial expressions, speech, or expressive body gestures to interpret classical expressive emotions. In contrast, we focus on a specific group of body gestures, called micro-gestures (MGs), used in psychology research to interpret inner human feelings.

Micro-gesture

MGs are subtle and spontaneous body movements that, together with micro-expressions, have been shown to be more reliable than ordinary facial expressions for conveying hidden emotional information.

The aim of our MiGA workshop & challenge is to build a united, supportive research community for micro-gesture analysis and related emotion understanding problems. It will facilitate discussions between research labs in academia and industry, identify the main attributes that vary across gesture-based emotion understanding approaches, and review the progress made in this field so far while identifying the next open problems the community should address. We provide two different datasets and related benchmarks, and we hope to inspire a new way of utilizing body gestures for human emotion understanding and bring a new direction to the emotion AI community.

Workshop Topics

    The workshop supplementing the challenge covers a wider scope, i.e., any paper that is related to gesture and micro-gesture analysis for emotion understanding but is not directly about any of the challenge tracks can be submitted as a workshop paper. Topics include but are not limited to:
  • Gesture and micro-gesture analysis for emotion understanding.
  • Vision-based methodologies for gesture-based emotion understanding, e.g., classification, detection, online recognition, generation, and transferring.
  • Solutions for special challenges involved in in-the-wild gesture analysis, e.g., severely imbalanced sample distributions, highly heterogeneous samples across classes, noisy irrelevant motions, noisy backgrounds, etc.
  • Different modalities developed for emotion understanding, e.g., body gestures, attentive gazes, and desensitized voices.
  • New data collected for the purpose of hidden emotion understanding.
  • Psychological study and neuroscience research about various body behaviors and their links to emotions.
  • Hardware/apparatuses/imaging systems developed for the purpose of hidden emotion understanding.
  • Applications of gestures and micro-gestures, e.g., for medical assessment in hospitals (ADHD, depression), for health surveillance at home or in other environments, and for emotion assessment in various scenarios such as education, job interviews, etc.

References

  • Chen H., Shi H., Liu X., Li X., and Zhao G. "SMG: A Micro-gesture Dataset Towards Spontaneous Body Gestures for Emotional Stress State Analysis." International Journal of Computer Vision (IJCV), 2023: 1-21.
  • Liu X., Shi H., Chen H., Yu Z., Li X., and Zhao G. "iMiGUE: An Identity-free Video Dataset for Micro-Gesture Understanding and Emotion Analysis." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021: 10631-10642.
  • Chen H., Liu X., Li X., Shi H., and Zhao G. "Analyze Spontaneous Gestures for Emotional Stress State Recognition: A Micro-gesture Dataset and Analysis with Deep Learning." In 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG), 2019: 1-8.

Related works

  • Guo X., Peng W., Huang H., and Xia Z. "Micro-gesture Online Recognition with Graph-convolution and Multiscale Transformers for Long Sequence." MiGA-IJCAI Workshop, 2023.
  • Li K., Guo D., Chen G., Peng X., and Wang M. "Joint Skeletal and Semantic Embedding Loss for Micro-gesture Classification." MiGA-IJCAI Workshop, 2023.
  • Guo D., Li K., Hu B., Zhang Y., and Wang M. "Benchmarking Micro-action Recognition: Dataset, Methods, and Applications." IEEE Transactions on Circuits and Systems for Video Technology, 2024.

Workshop Details

Introduction

Body gestures are an important channel for revealing people's emotions, alongside facial expressions and speech. On occasions when people intend to control or hide their true feelings, e.g., for social etiquette or other reasons, body gestures are harder to control and are thus more revealing clues to actual feelings than the face and the voice. Micro-gestures (MGs) are defined as a special category of body gestures that are indicative of humans' emotional status. Representative instances include scratching the head, touching the nose, and rubbing the hands, which are not intended to be shown when communicating with others but occur spontaneously due to, e.g., felt stress or discomfort, as shown in Figure 1. MGs differ from indicative gestures, which are performed on purpose to facilitate communication, e.g., using gestures to assist verbal expression during a discussion. Although research on general body gestures is prevalent, it largely concerns humans' macro body movements and lacks finer-level consideration such as the micro-gestures discussed above. Moreover, these studies are mainly concerned with recognizing movements performed expressively, and the link between gestures and hidden emotions is yet to be explored.


Fig. 1. Micro-gesture examples. A tennis player is talking while spontaneously performing body gestures in a post-match interview.

Motivated by the above observations, we jointly hold the second challenge and workshop on micro-gesture analysis for hidden emotion understanding (MiGA) to fill this gap in the current research field. The MiGA workshop and challenge aim to promote research on developing AI methods for MG analysis toward the goal of hidden emotion understanding. As this is the second MiGA event, we hold it in workshop + challenge mode, with the competition part building and providing benchmark datasets and a fair validation platform for researchers working on MG classification and online recognition for identity-insensitive emotion understanding. The workshop covers a wider scope than the challenge, i.e., any research that provides theoretical or practical support for gesture and micro-gesture analysis and emotion understanding.

Submission Guidelines

- Papers must comply with the CEURART paper style (1 column) and must fall into one of the following categories:

- Full research papers (minimum 7 pages)

- Short research papers (4-6 pages)

- Position papers (2 pages)

- The CEURART template can be found via this Overleaf link.

- Accepted papers (after blind review by at least 3 experts) will be included in a volume of the CEUR Workshop Proceedings. We are also planning to organize a special issue, and the authors of the most interesting and relevant papers will be invited to submit an extended manuscript.

- Workshop submissions will be handled by the CMT submission system; the submission link is as follows: Paper Submission. All questions about submissions should be emailed to chen.haoyu at oulu.fi

Workshop Important Dates
  • May 30, 2024. Paper submission deadline.
  • June 07, 2024 (extended from June 04, 2024). Notification to authors.
  • June 12, 2024 (extended from June 10, 2024). Camera-ready deadline.
  • August 4th, 2024. MiGA IJCAI 2024 Workshop, Jeju, South Korea.
Workshop Program

The proposed workshop (including the challenge) will be held as a full-day event, on August 4th, 2024.

  • Session 1
  • 9:00-9:15: Opening Remarks
  • 9:15-9:35: Group Introduction (networking)
  • 9:35-10:30: Invited Talk 1: Artificial Emotional Intelligence and Bodily Expression In the Wild (Prof. James Wang)
  • 10:30-11:00: First Coffee Break
  • Session 2
  • 11:00-11:15: Introduction of MiGA 2024
  • 11:15-11:35: Prototype Learning for Micro-gesture Classification
  • 11:35-11:55: Multi-modal Micro-gesture Classification via Multi-scale Heterogeneous Ensemble Network
  • 11:55-12:30: Group discussion (what makes Micro-gesture)
  • 12:30-14:00: Lunch Break
  • Session 3
  • 14:00-14:50: Invited Talk 2: Human motion understanding and synthesis (Associate Prof. Hao Tang)
  • 14:50-15:10: A Spatio-temporal Event Transformer on Versatile Tasks for Human Behavior Analysis
  • 15:10-15:30: A Multimodal Micro-gesture Classification Model Based on CLIP
  • 15:30-16:00: Second Coffee Break
  • Session 4
  • 16:00-16:20: Micro-gesture Online Recognition with Dual-stream Multi-scale Transformer in Long Videos
  • 16:20-16:40: Micro-gesture Online Recognition using Learnable Query Points
  • 16:40-17:10: Group discussion (what makes Micro-gesture beyond)
  • 17:10-17:20: Award ceremony
  • 17:20-17:30: Wrap-up/Closing Remarks

Note: Each paper must be presented on-site by an author/co-author at the conference.

Challenge Details

The MiGA challenge is planned as a continuous, annual event, and each year's MiGA will include different (multiple) challenge tasks, with large-scale datasets of expanding size. This year, MiGA focuses on two fundamental tasks of micro-gestures: classification and online recognition. In the future, the MiGA tracks will be extended to leveraging these identity-insensitive cues for hidden emotion understanding. The challenge is based on two spontaneous datasets: one is the SMG dataset published at FG 2019 in "Analyze Spontaneous Gestures for Emotional Stress State Recognition: A Micro-gesture Dataset and Analysis with Deep Learning", and the other is the iMiGUE dataset published at CVPR 2021 in "iMiGUE: An Identity-free Video Dataset for Micro-Gesture Understanding and Emotion Analysis". In this MiGA, we set two fundamental challenge tasks (tracks), and participating teams can choose to compete on one or both. The challenge is organized on the Kaggle website.

Track 1: Micro-gesture classification from short video clips. The MG datasets were collected in in-the-wild settings. Compared to ordinary action/gesture data, MGs concern more fine-grained and subtle body movements that occur spontaneously in practical interactions. Thus, learning these fine-grained body movement patterns, handling the imbalanced sample distribution of MGs, and distinguishing highly heterogeneous MG samples across classes are the main challenges to be addressed.

Track 2: Online micro-gesture recognition from long video sequences. Unlike existing online action/gesture recognition datasets, in which samples are well aligned/performed in the sequence, MG samples occur spontaneously in any combination or order, just as in daily communicative scenarios. Thus, the task of online micro-gesture recognition requires dealing with more complex body-movement transition patterns (e.g., co-occurrence of multiple MGs, incomplete MGs, complicated transitions between MGs, etc.) and detecting fine-grained MGs among irrelevant/contextual body movements, which poses new challenges that haven't been considered in previous gesture research.

The rules and guidelines for the competition/challenge
1. Datasets

Datasets for the challenge are available. The MiGA challenge is planned as a continuous annual event. Two benchmark datasets, published in IJCV 2023 and at CVPR 2021, will be used for the challenge. The first one is the Spontaneous Micro-Gesture (SMG) dataset, published in IJCV 2023, which consists of 3,692 samples of 17 MGs. The MG clips are annotated from 40 long video sequences (10-15 minutes each) with 821,056 frames in total. The data was collected from 40 subjects while they narrated a fake story and a real story to elicit different emotional states. Participants were recorded with Kinect, resulting in four modalities: RGB, 3D skeletal joints, depth, and silhouette. In this challenge, participants may use the skeleton modality, the RGB modality, or both. Details about the SMG dataset


The second one is the identity-free video dataset for Micro-Gesture Understanding and Emotion analysis (iMiGUE), published at CVPR 2021. It covers 32 MGs plus one non-MG class, collected from post-match press conference videos of famous tennis players. The dataset consists of 18,499 MG samples for detecting negative and positive emotions. The MG clips are annotated from 359 long video sequences (0.5-26 minutes each) with 3,765,600 frames in total. The dataset contains the RGB modality and 2D skeletal joints extracted with OpenPose. In this challenge, participants may use the skeleton modality, the RGB modality, or both. Details about the iMiGUE dataset


Note that: 1) not all of the data is used for the challenge; 2) parts of the data are selected and tailored for the different challenge tasks. Please follow the Kaggle competition links to obtain and process the datasets.


2. Evaluation

We deploy a cross-subject evaluation protocol. For the MG classification track, 13,936 and 3,692 MG clips from the iMiGUE and SMG datasets, respectively, will be used for training and validation, and the remaining 4,563 MG clips from iMiGUE will be used for testing. For the MG online recognition track, 252 and 40 long sequences from the iMiGUE and SMG datasets, respectively, will be used for training and validation, and the remaining 104 long sequences from iMiGUE will be used for testing.


MG classification track: We report Top-1 accuracy on the testing set of the iMiGUE dataset. Submissions will be ranked by Top-1 accuracy on the overall split (if Top-1 results are tied, Top-5 accuracy will be used to break the tie).
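
For illustration, here is a minimal Python sketch of how Top-1 and Top-5 accuracy can be computed from per-clip class scores. The function name, array shapes, and class count below are our own illustrative assumptions, not the official evaluation code (which runs on the Kaggle server).

```python
import numpy as np

def top_k_accuracy(scores, labels, k=1):
    """Fraction of clips whose ground-truth label is among the
    k highest-scoring classes."""
    # scores: (num_clips, num_classes) per-class scores;
    # labels: (num_clips,) ground-truth class indices.
    top_k = np.argsort(scores, axis=1)[:, -k:]   # indices of the k best classes
    hits = np.any(top_k == labels[:, None], axis=1)
    return hits.mean()

# Toy example: random scores for 4,563 test clips over an assumed 32 MG classes.
rng = np.random.default_rng(0)
scores = rng.random((4563, 32))
labels = rng.integers(0, 32, size=4563)
print("Top-1:", top_k_accuracy(scores, labels, k=1))
print("Top-5:", top_k_accuracy(scores, labels, k=5))
```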


MG online recognition track: We jointly evaluate the detection and classification performance of algorithms using the F1 score, defined as F1 = 2 * Precision * Recall / (Precision + Recall). Given a long video sequence to be evaluated, Precision is the fraction of correctly classified MGs among all gestures retrieved in the sequence by the algorithm, while Recall (or sensitivity) is the fraction of MGs correctly retrieved over the total number of annotated MGs.
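
As an illustration only, the sketch below computes this F1 for one sequence, assuming the common convention that a predicted gesture counts as a true positive when its class matches a not-yet-matched ground-truth annotation and their temporal intervals overlap sufficiently. The interval-matching criterion and IoU threshold here are our assumptions; the authoritative scoring is done on the Kaggle server.

```python
def f1_score(predictions, annotations, iou_threshold=0.5):
    """Joint detection/classification F1 for one long video sequence.

    predictions / annotations: lists of (class_id, start_frame, end_frame).
    A prediction is a true positive if it matches a not-yet-matched
    annotation of the same class with temporal IoU >= iou_threshold
    (assumed matching criterion).
    """
    def temporal_iou(a, b):
        intersection = max(0, min(a[2], b[2]) - max(a[1], b[1]))
        union = max(a[2], b[2]) - min(a[1], b[1])
        return intersection / union if union > 0 else 0.0

    matched, tp = set(), 0
    for pred in predictions:
        for i, ann in enumerate(annotations):
            if (i not in matched and pred[0] == ann[0]
                    and temporal_iou(pred, ann) >= iou_threshold):
                matched.add(i)
                tp += 1
                break

    precision = tp / len(predictions) if predictions else 0.0
    recall = tp / len(annotations) if annotations else 0.0
    return 2 * precision * recall / (precision + recall) if tp else 0.0
```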


Submission format for both tracks: Participants must submit their predictions as a single .csv file on the Kaggle platform; more detailed instructions for each track can be found on Kaggle. Results will be evaluated on the server and displayed on the ranking list in real time. The organization team reserves the right to examine participants' source code to ensure the reproducibility of the algorithms. The final results and rankings will be confirmed and announced by the organizers.
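
Purely as a hypothetical illustration of assembling such a file, the snippet below writes per-clip predictions to a CSV. The column names and clip identifiers are placeholders; the actual schema for each track is specified on the corresponding Kaggle page.

```python
import csv

# Hypothetical per-clip predictions: clip identifier -> predicted MG class.
predictions = {"clip_0001": 12, "clip_0002": 5, "clip_0003": 28}

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "prediction"])  # placeholder header row
    for clip_id, label in sorted(predictions.items()):
        writer.writerow([clip_id, label])
```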


Participation guidelines
Please visit the Kaggle websites to join the competitions:

The 2nd MiGA-IJCAI Challenge Track 1: Micro-gesture Classification

The 2nd MiGA-IJCAI Challenge Track 2: Micro-gesture Online Recognition

Final results are available

We have examined all submitted source code and finalized the rankings of the 2nd MiGA challenge. Congratulations to the following teams on their final rankings:

Track 1: Micro-gesture Classification

1. 'HFUT-VUT'

2. 'NPU-MUCIS'

3. 'ywww11'


Track 2: Micro-gesture Online Recognition

1. 'NPU-MUCIS'

2. 'HFUT-VUT'

3. 'JDY203'

Important Dates (subject to slight adjustment)
    The timeline for the Challenge will be organized as follows:
  • Mar 29, 2024. Call for Challenge online. Registration starts.
  • Apr 9, 2024. Release of training data.
  • May 2, 2024. Release of testing data.
  • May 12, 2024. Final testing data and result submission. Registration ends.
  • May 17, 2024. Release of challenge results.
  • May 30, 2024. Paper submission deadline (workshop).
  • June 07, 2024 (extended from June 04, 2024). Notification to authors.
  • June 12, 2024 (extended from June 10, 2024). Camera-ready deadline.
  • August 4th, 2024 (during IJCAI, August 3rd-9th). MiGA IJCAI 2024 Workshop, Jeju, South Korea.

Contact us

You are welcome to join our Discord channel and discuss with peers.
https://discord.gg/XPrvM8WjnP

    The contact info is listed as follows:
  • For questions regarding workshop submissions and the competition, please get in touch with chen.haoyu at oulu.fi
  • For questions about the workshop local arrangements and information, please get in touch with local@ijcai24.org
  • For questions regarding general issues of the workshop program, please get in touch with guoying.zhao at oulu.fi