
Journal article

Hierarchical Brain Network for Face and Voice Integration of Emotion Expression

Jodie Davies-Thompson, Giulia V Elli, Mohamed Rezk, Stefania Benetti, Markus van Ackeren, Olivier Collignon

Cerebral Cortex, Volume: 29, Issue: 9, Pages: 3590 - 3605

Swansea University Author: Jodie Davies-Thompson


DOI (Published version): 10.1093/cercor/bhy240

Abstract

The brain has separate specialized computational units to process faces and voices, located in occipital and temporal cortices. However, humans seamlessly integrate signals from the faces and voices of others for optimal social interaction. How are emotional expressions, when delivered by different sensory modalities (faces and voices), integrated in the brain? In this study, we characterized the brain's response to faces, voices, and combined face-voice information (congruent, incongruent), which varied in expression (neutral, fearful). Using a whole-brain approach, we found that only the right posterior superior temporal sulcus (rpSTS) responded more to bimodal stimuli than to face or voice alone, but only when the stimuli contained emotional expression. Face- and voice-selective regions of interest, extracted from independent functional localizers, similarly revealed multisensory integration in the face-selective rpSTS only; further, this was the only face-selective region that also responded significantly to voices. Dynamic causal modeling revealed that the rpSTS receives unidirectional information from the face-selective fusiform face area and the voice-selective temporal voice area, with emotional expression affecting the connection strength. Our study promotes a hierarchical model of face and voice integration, with convergence in the rpSTS, and suggests that such integration depends on the (emotional) salience of the stimuli.


Published in: Cerebral Cortex
ISSN: 1047-3211 (print), 1460-2199 (electronic)
Published: Oxford University Press (OUP), 1 September 2019
Keywords: emotional expression, faces, fMRI, multisensory, voice
Swansea University affiliation: School of Psychology, Faculty of Medicine, Health and Life Sciences
Online Access: Check full text

URI: https://cronfa.swan.ac.uk/Record/cronfa45285