Modulating Cognitive Models of Emotional Intelligence

Apply for a PhD on this topic
via the CDT in Socially Intelligent Artificial Agents


Collaborators: Prof. Frank Pollick

State-of-the-art artificial intelligence (AI) systems mimic how the brain processes information, achieving unprecedented accuracy and performance in tasks such as object/face recognition and text/speech translation. However, one key characteristic that defines human success is emotional intelligence. Empathy, the ability to understand other people's feelings and reflect upon them emotionally, shapes social interaction and is important to both personal and professional success. Although some progress has been made in developing systems that detect emotions from facial expressions and physiological data, building systems that relate to and reflect upon those emotions is far more challenging. Therefore, understanding how empathic and emotional responses emerge from complex information processing between key brain regions is of paramount importance for developing emotionally aware AI agents.

In this project, we will exploit real-time functional Magnetic Resonance Imaging (fMRI) neurofeedback techniques to build cognitive models that explain the modulation of brain activity in key regions related to empathy and emotion. For example, the anterior insula is a deep gray-matter brain region that has been consistently implicated in empathic and emotional responses, as well as in the abnormal emotional processing observed in disorders such as Autism Spectrum Disorder and misophonia (Kumar et al. 2017). Neurofeedback has shown promising results in regulating anterior insula activity and could enable therapeutic training techniques (Kanel et al. 2019).
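
To make the neurofeedback loop concrete, the minimal Python sketch below reduces each incoming (motion-corrected) fMRI volume to a mean signal over the target region, converts it to percent signal change against a baseline block, and maps it to a 0-1 display level. The ROI mask, baseline estimate, and ±2% clipping range are illustrative assumptions rather than a prescribed protocol.

```python
import numpy as np

def neurofeedback_step(volume, roi_mask, baseline_mean, history):
    """One iteration of a (simplified) real-time fMRI neurofeedback loop.

    volume        : 3-D array, the latest motion-corrected fMRI volume
    roi_mask      : boolean array of the same shape selecting the target
                    region, e.g. an anterior insula mask (assumed given)
    baseline_mean : mean ROI signal estimated from a rest/baseline block
    history       : list of past ROI means, kept for later model fitting
    """
    roi_mean = float(volume[roi_mask].mean())
    history.append(roi_mean)
    # Percent signal change relative to baseline: a common feedback signal.
    psc = 100.0 * (roi_mean - baseline_mean) / baseline_mean
    # Clip to an assumed +/-2% range and rescale to a 0..1 thermometer level.
    return float(np.clip(psc, -2.0, 2.0) / 4.0 + 0.5)
```

In an actual experiment this step would sit inside the scanner's acquisition loop, with the returned level driving a visual thermometer that participants use to up- or down-regulate the region.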

This approach would reveal how brain regions interact during neuromodulation and allow cognitive models to emerge in real time. Subsequently, to allow training in more naturalistic environments, we propose cross-domain learning between fMRI and EEG. The motivation is that, although fMRI is the gold-standard imaging technique for deep gray-matter structures, it is limited by its lack of portability, its discomfort in use, and its low temporal resolution (Deligianni et al. 2014). Advances in wearable EEG technology, on the other hand, show promise for use well beyond controlled lab experiments. Toward this end, advanced machine learning algorithms based on representation learning and domain generalisation would be developed. Domain generalisation in deep learning aims to learn features and representations that transfer to an ‘unseen’ target domain by eliminating biases observed across multiple source domains (Volpi et al. 2018).
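
To illustrate the domain-generalisation component, the sketch below follows the adversarial data augmentation recipe of Volpi et al. (2018) in PyTorch: ‘fictitious’ training examples are generated by ascending the task loss under a distance penalty that keeps them close to the source data, and the model is trained on the union of real and augmented batches. This is a simplified sketch: the penalty here acts in input space, whereas the original paper constrains distance in a semantic feature space, and the function names and hyperparameters are our own illustrative choices.

```python
import torch

def adversarial_augment(model, loss_fn, x, y, gamma=1.0, lr=1.0, steps=15):
    """Generate 'fictitious' examples by maximising the task loss while a
    squared-distance penalty keeps them close to the source batch x."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        objective = loss_fn(model(x_adv), y) - gamma * ((x_adv - x) ** 2).sum()
        grad, = torch.autograd.grad(objective, x_adv)
        x_adv = (x_adv + lr * grad).detach().requires_grad_(True)
    return x_adv.detach()

def train_step(model, loss_fn, optimizer, x, y):
    """One optimisation step on the union of the source batch and its
    adversarially augmented counterpart."""
    x_aug = adversarial_augment(model, loss_fn, x, y)
    optimizer.zero_grad()
    loss = loss_fn(model(torch.cat([x, x_aug])), torch.cat([y, y]))
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the cross-domain setting envisaged here, the source batches could come from fMRI-derived representations and the ‘unseen’ domain from wearable EEG recordings.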

In summary, the overall aims of the project are:

  • To build data-driven cognitive models of real-time brain network interaction during emotional modulation via neurofeedback techniques (a minimal connectivity sketch follows this list).

  • To develop advanced machine learning algorithms to perform cross-domain learning between fMRI and EEG.

  • To develop intelligent artificial agents based on portable EEG systems that successfully regulate emotional responses, taking into account the cognitive models derived in the fMRI scanner.
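
As a minimal illustration of the first aim, the sketch below turns ROI-averaged time series into time-resolved functional connectivity via sliding-window Pearson correlation; the window and step lengths are illustrative assumptions, and richer network models would build on the same interface.

```python
import numpy as np

def sliding_window_connectivity(ts, window=30, step=5):
    """Time-resolved functional connectivity from ROI time series.

    ts : array of shape (n_timepoints, n_rois), e.g. ROI-averaged fMRI
         signals (or EEG band power) recorded during neurofeedback.
    Returns an array of shape (n_windows, n_rois, n_rois) holding one
    Pearson correlation matrix per window.
    """
    n_t, _ = ts.shape
    mats = [np.corrcoef(ts[s:s + window].T)
            for s in range(0, n_t - window + 1, step)]
    return np.stack(mats)
```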

Related Publications

  • Deligianni et al. ‘Relating resting-state fMRI and EEG whole-brain connectomes across frequency bands’, Frontiers in Neuroscience, 8(258), 2014.

  • Kanel et al. ‘Empathy to emotional voices and the use of real-time fMRI to enhance activation of the anterior insula’, NeuroImage, 198, 2019.

  • Kumar et al. ‘The Brain Basis for Misophonia’, Current Biology, 27(4), 2017.

  • Volpi et al. ‘Generalizing to Unseen Domains via Adversarial Data Augmentation’, Neural Information Processing Systems, 2018.