
Conference Paper/Proceeding/Abstract

Raising User Awareness of Bias-Leakage via Proxies in AI Models to Improve Fairness in Decision-making

Peter Daish, Matt Roach, Alan Dix

AISB Convention 2023, Pages: 86 - 88

Swansea University Authors: Peter Daish, Matt Roach, Alan Dix

Abstract

Artificial Intelligence systems are becoming more common in decision-making, both facilitating automated decisions and working in tandem with human decision-makers as decision-support systems (DSS). AI-assisted DSS are typically employed to make data-driven recommendations to human decision-makers in an effort to improve efficiency and accuracy. In addition, the AI models used to power DSS are typically black-box in nature, meaning that human decision-makers are unaware of exactly how these systems reach their conclusions. This is problematic since research in algorithmic fairness already shows that data-driven AI systems can be influenced by social biases present in training data, reinforcing systemic biases and perpetuating unfairness towards minority social groups. When used in high-stakes decision-making, such systems risk prolonging systemic biases and further driving social division. An emerging area of research acknowledges that unfairness can leak through 'proxy' features, causing an implicit-bias effect. In this work-in-progress paper, we propose explaining the fairness properties of AI systems and their downstream social impacts to decision-makers, by visualising bias-leakage through proxies, for improved fairness. Finally, we are currently conducting a study to empirically assess how visualising proxy biases in AI-assisted DSS can affect decision-making and improve fairness.


Published in: AISB Convention 2023
ISBN: 978-1-908187-85-7
Published: 2023
URI: https://cronfa.swan.ac.uk/Record/cronfa68368
first_indexed 2024-11-28T13:47:43Z
last_indexed 2024-12-10T20:11:13Z
id cronfa68368
recordtype SURis
fullrecord <?xml version="1.0"?><rfc1807><datestamp>2024-12-10T13:56:11.5687894</datestamp><bib-version>v2</bib-version><id>68368</id><entry>2024-11-28</entry><title>Raising User Awareness of Bias-Leakage via Proxies in AI Models to Improve Fairness in Decision-making</title><swanseaauthors><author><sid>526bb6b1afc3f8acae8bd6a962b107f8</sid><firstname>Peter</firstname><surname>Daish</surname><name>Peter Daish</name><active>true</active><ethesisStudent>false</ethesisStudent></author><author><sid>9722c301d5bbdc96e967cdc629290fec</sid><ORCID>0000-0002-1486-5537</ORCID><firstname>Matt</firstname><surname>Roach</surname><name>Matt Roach</name><active>true</active><ethesisStudent>false</ethesisStudent></author><author><sid>e31e47c578b2a6a39949aa7f149f4cf9</sid><firstname>Alan</firstname><surname>Dix</surname><name>Alan Dix</name><active>true</active><ethesisStudent>false</ethesisStudent></author></swanseaauthors><date>2024-11-28</date><abstract>Artificial Intelligence systems are becoming more common in decision-making, both facilitating automated decisions and working in tandem with human decision-makers as decision-support systems (DSS). AI-assisted DSS are typically employed to make data-driven recommendations to human decision-makers in an effort to improve efficiency and accuracy. In addition, the AI models used to power DSS are typically black-box in nature, meaning that human decision-makers are unaware of exactly how these systems reach their conclusions. This is problematic since research in algorithmic fairness already shows that data-driven AI systems can be influenced by social biases present in training data, reinforcing systemic biases and perpetuating unfairness towards minority social groups. When used in high-stakes decision-making, such systems risk prolonging systemic biases and further driving social division. An emerging area of research acknowledges that unfairness can leak through &#x2018;proxy&#x2019; features, causing an implicit-bias effect. In this work-in-progress paper, we propose explaining the fairness properties of AI systems and their downstream social impacts to decision-makers, by visualising bias-leakage through proxies, for improved fairness. Finally, we are currently conducting a study to empirically assess how visualising proxy biases in AI-assisted DSS can affect decision-making and improve fairness.</abstract><type>Conference Paper/Proceeding/Abstract</type><journal>AISB Convention 2023</journal><volume/><journalNumber/><paginationStart>86</paginationStart><paginationEnd>88</paginationEnd><publisher/><placeOfPublication/><isbnPrint>978-1-908187-85-7</isbnPrint><isbnElectronic/><issnPrint/><issnElectronic/><keywords/><publishedDay>13</publishedDay><publishedMonth>4</publishedMonth><publishedYear>2023</publishedYear><publishedDate>2023-04-13</publishedDate><doi/><url/><notes/><college>COLLEGE NANME</college><CollegeCode>COLLEGE CODE</CollegeCode><institution>Swansea University</institution><apcterm/><funders/><projectreference/><lastEdited>2024-12-10T13:56:11.5687894</lastEdited><Created>2024-11-28T11:48:37.9393203</Created><path><level id="1">Faculty of Science and Engineering</level><level id="2">School of Mathematics and Computer Science - Computer Science</level></path><authors><author><firstname>Peter</firstname><surname>Daish</surname><order>1</order></author><author><firstname>Matt</firstname><surname>Roach</surname><orcid>0000-0002-1486-5537</orcid><order>2</order></author><author><firstname>Alan</firstname><surname>Dix</surname><order>3</order></author></authors><documents/><OutputDurs/></rfc1807>
spelling 2024-12-10T13:56:11.5687894 v2 68368 2024-11-28 Raising User Awareness of Bias-Leakage via Proxies in AI Models to Improve Fairness in Decision-making 526bb6b1afc3f8acae8bd6a962b107f8 Peter Daish Peter Daish true false 9722c301d5bbdc96e967cdc629290fec 0000-0002-1486-5537 Matt Roach Matt Roach true false e31e47c578b2a6a39949aa7f149f4cf9 Alan Dix Alan Dix true false 2024-11-28 Artificial Intelligence systems are becoming more common in decision-making, both facilitating automated decisions and working in tandem with human decision-makers as decision-support systems (DSS). AI-assisted DSS are typically employed to make data-driven recommendations to human decision-makers in an effort to improve efficiency and accuracy. In addition, the AI models used to power DSS are typically black-box in nature, meaning that human decision-makers are unaware of exactly how these systems reach their conclusions. This is problematic since research in algorithmic fairness already shows that data-driven AI systems can be influenced by social biases present in training data, reinforcing systemic biases and perpetuating unfairness towards minority social groups. When used in high-stakes decision-making, such systems risk prolonging systemic biases and further driving social division. An emerging area of research acknowledges that unfairness can leak through ‘proxy’ features, causing an implicit-bias effect. In this work-in-progress paper, we propose explaining the fairness properties of AI systems and their downstream social impacts to decision-makers, by visualising bias-leakage through proxies, for improved fairness. Finally, we are currently conducting a study to empirically assess how visualising proxy biases in AI-assisted DSS can affect decision-making and improve fairness.
Conference Paper/Proceeding/Abstract AISB Convention 2023 86 88 978-1-908187-85-7 13 4 2023 2023-04-13 COLLEGE NANME COLLEGE CODE Swansea University 2024-12-10T13:56:11.5687894 2024-11-28T11:48:37.9393203 Faculty of Science and Engineering School of Mathematics and Computer Science - Computer Science Peter Daish 1 Matt Roach 0000-0002-1486-5537 2 Alan Dix 3
title Raising User Awareness of Bias-Leakage via Proxies in AI Models to Improve Fairness in Decision-making
spellingShingle Raising User Awareness of Bias-Leakage via Proxies in AI Models to Improve Fairness in Decision-making
Peter Daish
Matt Roach
Alan Dix
title_short Raising User Awareness of Bias-Leakage via Proxies in AI Models to Improve Fairness in Decision-making
title_full Raising User Awareness of Bias-Leakage via Proxies in AI Models to Improve Fairness in Decision-making
title_fullStr Raising User Awareness of Bias-Leakage via Proxies in AI Models to Improve Fairness in Decision-making
title_full_unstemmed Raising User Awareness of Bias-Leakage via Proxies in AI Models to Improve Fairness in Decision-making
title_sort Raising User Awareness of Bias-Leakage via Proxies in AI Models to Improve Fairness in Decision-making
author_id_str_mv 526bb6b1afc3f8acae8bd6a962b107f8
9722c301d5bbdc96e967cdc629290fec
e31e47c578b2a6a39949aa7f149f4cf9
author_id_fullname_str_mv 526bb6b1afc3f8acae8bd6a962b107f8_***_Peter Daish
9722c301d5bbdc96e967cdc629290fec_***_Matt Roach
e31e47c578b2a6a39949aa7f149f4cf9_***_Alan Dix
author Peter Daish
Matt Roach
Alan Dix
author2 Peter Daish
Matt Roach
Alan Dix
format Conference Paper/Proceeding/Abstract
container_title AISB Convention 2023
container_start_page 86
publishDate 2023
institution Swansea University
isbn 978-1-908187-85-7
college_str Faculty of Science and Engineering
hierarchytype
hierarchy_top_id facultyofscienceandengineering
hierarchy_top_title Faculty of Science and Engineering
hierarchy_parent_id facultyofscienceandengineering
hierarchy_parent_title Faculty of Science and Engineering
department_str School of Mathematics and Computer Science - Computer Science{{{_:::_}}}Faculty of Science and Engineering{{{_:::_}}}School of Mathematics and Computer Science - Computer Science
document_store_str 0
active_str 0
description Artificial Intelligence systems are becoming more common in decision-making, both facilitating automated decisions and working in tandem with human decision-makers as decision-support systems (DSS). AI-assisted DSS are typically employed to make data-driven recommendations to human decision-makers in an effort to improve efficiency and accuracy. In addition, the AI models used to power DSS are typically black-box in nature, meaning that human decision-makers are unaware of exactly how these systems reach their conclusions. This is problematic since research in algorithmic fairness already shows that data-driven AI systems can be influenced by social biases present in training data, reinforcing systemic biases and perpetuating unfairness towards minority social groups. When used in high-stakes decision-making, such systems risk prolonging systemic biases and further driving social division. An emerging area of research acknowledges that unfairness can leak through ‘proxy’ features, causing an implicit-bias effect. In this work-in-progress paper, we propose explaining the fairness properties of AI systems and their downstream social impacts to decision-makers, by visualising bias-leakage through proxies, for improved fairness. Finally, we are currently conducting a study to empirically assess how visualising proxy biases in AI-assisted DSS can affect decision-making and improve fairness.
published_date 2023-04-13T05:48:49Z
_version_ 1822108134391414784