
Conference Paper/Proceeding/Abstract

Designing Bias Suppressing Robots for 'fair' Robot moderated Human-Human Interactions.

Peter Daish, Takayuki Kanda, Matt Roach, Muneeb Ahmad

Proceedings of the 12th International Conference on Human-Agent Interaction, pp. 347–349

Swansea University Authors: Peter Daish, Matt Roach, Muneeb Ahmad

Full text not available from this repository.

DOI (Published version): 10.1145/3687272.3690877

Abstract

Research has shown that data-driven robots deployed in social settings are likely to unconsciously perpetuate systemic social biases. Despite this, robots can also be deployed to promote fair behaviour in humans. These phenomena have led to the development of two broad sub-disciplines in HRI concerning 'fairness': a data-centric approach to ensuring robots operate fairly and a human-centric approach which aims to use robots as interventions to promote fairness in society. To date, these two fields have developed independently, thus it is unknown how data-driven robots can be used to suppress biases in human-human interactions. In this paper, we present a conceptual framework and hypothetical example of how robots might deploy data-driven fairness interventions to actively suppress social biases in human-human interactions.


Published in: Proceedings of the 12th International Conference on Human-Agent Interaction
ISBN: 979-8-4007-1178-7 (print and electronic)
Published: New York, NY, USA: ACM, 24 November 2024 (Poster)
URI: https://cronfa.swan.ac.uk/Record/cronfa68340