Research

Our research mainly involves the understanding and automatic recognition of human affective and socio-relational constructs, such as emotions, empathy and rapport. Such technologies can be used to animate virtual humans or social robots. We aim to work toward technologies that augment rather than replace human-human interaction. We are also interested in studying the verbal and nonverbal behaviors associated with mental health disorders and in identifying successful strategies for psychotherapy through computational approaches.

Emotion recognition

Emotions play a central role in shaping our behavior and guiding our decisions. Affective Computing strives to enable machines to recognize, understand and express emotions. Advancing automatic understanding of human emotions involves technical challenges in signal processing, computer vision and machine learning; chief among them are the lack of reliable labels, variability across individuals, and the scarcity of labeled data. We have studied and evaluated machine-based emotion recognition methods from EEG signals, facial expressions, pupil dilation and psychophysiological signals. Our past work demonstrated how behaviors associated with cognitive appraisals can be used in an appraisal-driven framework for emotion recognition. We are also interested in understanding motivational components of emotions, including effort and action tendencies.
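
To make the multimodal recognition idea concrete, the sketch below shows a minimal late-fusion classifier that combines hypothetical EEG band-power features with facial action unit intensities. The feature dimensions, the synthetic data and the probability-averaging fusion are illustrative assumptions only, not the models described in the publications below.

    # Minimal late-fusion sketch (illustrative assumptions, not our published models).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Stand-in data: 200 samples, 32 EEG band-power features, 17 facial action
    # unit (AU) intensities, and binary high/low arousal labels.
    eeg = rng.normal(size=(200, 32))
    aus = rng.normal(size=(200, 17))
    labels = rng.integers(0, 2, size=200)

    # Train one classifier per modality.
    clf_eeg = LogisticRegression(max_iter=1000).fit(eeg, labels)
    clf_aus = LogisticRegression(max_iter=1000).fit(aus, labels)

    def predict_fused(eeg_x, aus_x):
        """Late fusion: average per-modality class probabilities, then decide."""
        p = 0.5 * clf_eeg.predict_proba(eeg_x) + 0.5 * clf_aus.predict_proba(aus_x)
        return p.argmax(axis=1)

    print(predict_fused(eeg[:5], aus[:5]))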

Representative publications

  • Y. Yin, L. Lu, Y. Wu, M. Soleymani. Self-Supervised Patch Localization for Cross-Domain Facial Action Unit Detection. In 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2021.
  • Y. Yin, B. Huang, Y. Wu, M. Soleymani. Speaker-invariant adversarial domain adaptation for emotion recognition. 2020 International Conference on Multimodal Interaction (ICMI), 2020.
  • S. Rayatdoost, M. Soleymani. Cross-Corpus EEG-Based Emotion Recognition, IEEE International Workshop on Machine Learning for Signal Processing (MLSP), Aalborg, Denmark, 2018.
  • M. Soleymani, M. Mortillaro. Behavioral and Physiological Responses to Visual Interest and Appraisals: Multimodal Analysis and Automatic Recognition, Frontiers in ICT, 5(17), 2018.
  • M. Soleymani, S. Asghari-Esfeden, Y. Fu, M. Pantic. Analysis of EEG Signals and Facial Expressions for Continuous Emotion Detection, IEEE Transactions on Affective Computing, 7(1): pp. 17-28, 2016.

Computational mental health

The emerging field of computational mental health aspires to leverage recent advances in human-centered artificial intelligence to assist care providers in delivering mental health support. For example, automatic human behavior tracking can be used to train psychotherapists and to detect behavioral markers associated with mental health conditions. Identifying behavioral markers that track the severity of mental health disorders, and recognizing effective therapeutic behaviors, can support treatment and help monitor outcomes. We study and develop machine-based methods for the automatic assessment of mental health disorders, e.g., PTSD and depression.

We also research and develop computational frameworks that jointly analyze verbal, nonverbal, and dyadic behavior to better predict therapeutic outcomes in motivational interviewing.
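
As a simplified illustration of how verbal behavior can be turned into session-level features, the sketch below aggregates hypothetical utterance-level client codes into summary statistics. The code labels and features are assumptions loosely inspired by MISC-style coding, used here only for illustration; they are not the coding scheme or models in the publications below.

    # Session-level features from per-utterance client codes (hypothetical labels).
    from collections import Counter

    def session_features(utterance_codes):
        """Summarize a sequence of per-utterance client behavior codes."""
        counts = Counter(utterance_codes)
        change = counts.get("change_talk", 0)
        sustain = counts.get("sustain_talk", 0)
        total = max(len(utterance_codes), 1)
        return {
            "change_talk_rate": change / total,
            "sustain_talk_rate": sustain / total,
            # Proportion of change talk among valenced client utterances.
            "change_ratio": change / max(change + sustain, 1),
        }

    codes = ["neutral", "change_talk", "sustain_talk", "change_talk", "neutral"]
    print(session_features(codes))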

Representative publications

  • M. Tran, E. Bradley, M. Matvey, J. Woolley, M. Soleymani. Modeling Dynamics of Facial Behavior for Mental Health Assessment. In 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), pp. 1-5. IEEE, 2021.
  • L. Tavabi, L. Stefanov, B. Borsari, J.D. Woolley, S. Scherer, M. Soleymani. Multimodal Automatic Coding of Client Behavior in Motivational Interviewing, ACM Int’l Conference on Multimodal Interaction (ICMI), 2020.
  • F. Ringeval et al. AVEC 2019 Workshop and Challenge: State-of-Mind, Detecting Depression with AI, and Cross-Cultural Affect Recognition. In Proceedings of the 9th International Audio/Visual Emotion Challenge and Workshop, pp. 3-12, ACM, 2019.
  • L. Tavabi. Multimodal Machine Learning for Interactive Mental Health Therapy. In 2019 International Conference on Multimodal Interaction, 2019. ACM.

Multimodal human behavior sensing

OpenSense is a platform for real-time multimodal acquisition and recognition of social signals. OpenSense enables precisely synchronized and coordinated acquisition and processing of human behavioral signals. Powered by Microsoft's Platform for Situated Intelligence, OpenSense supports a range of sensor devices and machine learning tools and encourages developers to add new components to the system through straightforward mechanisms for component integration. The platform also offers an intuitive graphical user interface for building application pipelines from existing components. OpenSense is freely available for academic research.
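
The sketch below illustrates, in generic Python, the kind of timestamp-based stream alignment such a pipeline performs. It is not the OpenSense or Platform for Situated Intelligence API (both are .NET-based); the stream names and sampling rates are hypothetical.

    # Nearest-timestamp alignment of two sensor streams (generic illustration only).
    import numpy as np

    # Stand-in streams: a 30 Hz video-derived signal and a 128 Hz physiological
    # signal, each given as (timestamps in seconds, values).
    video_t = np.arange(0, 10, 1 / 30)
    video_v = np.sin(video_t)
    physio_t = np.arange(0, 10, 1 / 128)
    physio_v = np.cos(physio_t)

    def align_nearest(ref_t, other_t, other_v):
        """For each reference timestamp, take the closest sample of the other stream."""
        idx = np.clip(np.searchsorted(other_t, ref_t), 1, len(other_t) - 1)
        left, right = other_t[idx - 1], other_t[idx]
        idx -= (ref_t - left) < (right - ref_t)  # pick whichever neighbor is closer
        return other_v[idx]

    # Physiological samples re-expressed on the video clock, ready for fusion.
    physio_on_video_clock = align_nearest(video_t, physio_t, physio_v)
    print(video_v.shape, physio_on_video_clock.shape)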

Understanding social and relational constructs from verbal and nonverbal behavior is another topic of interest. We are conducting research to understand the behaviors associated with empathy, self-disclosure and rapport, and to build nonverbal behavior generation models for interactive virtual humans.

Representative publications

  • L. Tavabi, K. Stefanov, S. Nasihati Gilani, D. Traum, M. Soleymani. Multimodal Learning for Identifying Opportunities for Empathetic Responses. In 2019 International Conference on Multimodal Interaction, 2019. ACM.
  • M. Soleymani, K. Stefanov, S-H. Kang, J. Ondras, J. Gratch. Multimodal Analysis and Estimation of Intimate Self-Disclosure. In 2019 International Conference on Multimodal Interaction, 2019. ACM.
  • M. Soleymani, M. Riegler, P. Halvorsen. Multimodal Analysis of Image Search Intent, ACM Int’l Conf. Multimedia Retrieval (ICMR), Bucharest, Romania, 2017.

Multimedia computing

Understanding subjective multimedia attributes can help improve user experience in multimedia retrieval and recommendation systems. We have been active in developing new methods for computational understanding of subjective multimedia attributes, e.g., mood and persuasiveness. Developing such methods requires a thorough understanding of these subjective attributes, in order to design suitable databases and sound evaluation methodologies. We have built multimedia databases and evaluation strategies through user-centered data labeling and large-scale evaluation via crowdsourcing. We are also interested in attributes related to human perception, e.g., melody in music and comprehensiveness in visual content. Recent examples of this line of research include acoustic analysis for music mood recognition and a micro-video retrieval method that models the implicit sentiment association between visual content and language.
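
As a simplified illustration of the retrieval step underlying cross-modal embedding methods, the sketch below ranks a gallery of items for a query by cosine similarity in a shared embedding space. The random embeddings are stand-ins for learned representations and do not reflect any specific model from the publications below.

    # Cross-modal retrieval by cosine similarity (random stand-in embeddings).
    import numpy as np

    rng = np.random.default_rng(0)
    video_embs = rng.normal(size=(1000, 256))  # hypothetical gallery of 1000 videos
    query_emb = rng.normal(size=(256,))        # hypothetical embedded text query

    def rank_by_cosine(query, gallery, top_k=5):
        """Return indices of the top-k gallery items most similar to the query."""
        gallery_n = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        query_n = query / np.linalg.norm(query)
        scores = gallery_n @ query_n
        return np.argsort(-scores)[:top_k]

    print(rank_by_cosine(query_emb, video_embs))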

Representative publications

  • Y. Song, M. Soleymani, Polysemous Visual-Semantic Embedding for Cross-Modal Retrieval, Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, 2019.
  • A. Aljanaki, M. Soleymani. A data-driven approach to mid-level perceptual musical feature modeling, 19th Conference of the International Society for Music Information Retrieval (ISMIR 2018), Paris, France, 2018.
  • A. Aljanaki, Y.-H. Yang, M. Soleymani. Developing a Benchmark for Emotional Analysis in Music, PLOS ONE, 12(3):e0173392, 2017.