Conference Paper/Proceeding/Abstract
A position on establishing effective explanations from human-centred counterfactuals for automated financial decisions
Proceedings of the AISB 2023 Convention, Swansea University, Swansea, Wales, April 13-14 2023, Pages: 89 - 97
Swansea University Authors: Daniele Doneddu, Matt Roach, Matt Jones, Jen Pearson, Alex Blandin
Abstract
There has been a historic focus among explainable artificial intelligence practitioners to increase user trust through the provision of explanation traces from algorithmic decisions, either by interpretable agents and expert systems or from transparent surrogate models. Beyond this deterministic causal reasoning, significant developments were made in probabilistic post-hoc explanations, often presented as percentage confidence or importance and contribution. However, simultaneous work in the social sciences revealed that typical users and experts both preferred and generated explanations that differed in conception from those often employed in artificial intelligence, with the existing deterministic traces and probabilistic reasoning being found less satisfactory, particularly when the best explanations for a given user were not the most likely. In this piece, we hold the position that incorporating an understanding of explanations as a model and process — inspired by social science research in human-centred explanations — will improve user satisfaction and trust in both the given decision and the overall model. We consider how practitioners may design explainable artificial intelligence that enables typical users to interface and interact with counterfactual explanations and therein develop more appropriate explanations for that specific user and decision. Specifically, we argue in favour of satisfying design desiderata for explanations that are causal, contrastive, contextual, and interactively selected with the user. This piece is based on ongoing work that demonstrates the practicability of interactive human-centred explanatory models and is inspired by previous work to uncover design characteristics for counterfactual explanations that enhance user trust and understanding in algorithmic decisions for automated financial decisions.
Published in: Proceedings of the AISB 2023 Convention, Swansea University, Swansea, Wales, April 13-14 2023
ISBN: 978-1-908187-85-7
Published: AISB, 2023
Online Access: https://aisb.org.uk/wp-content/uploads/2023/05/aisb2023.pdf
URI: https://cronfa.swan.ac.uk/Record/cronfa64477
first_indexed: 2023-09-08T10:13:17Z
last_indexed: 2024-11-25T14:14:05Z
id: cronfa64477
recordtype: SURis
fullrecord:
<?xml version="1.0"?><rfc1807><datestamp>2024-06-27T13:37:02.4215425</datestamp><bib-version>v2</bib-version><id>64477</id><entry>2023-09-08</entry><title>A position on establishing effective explanations from human-centred counterfactuals for automated financial decisions</title><swanseaauthors><author><sid>b1b5db525b5dbd5713e33d143f3d5d60</sid><ORCID>0000-0003-2173-302X</ORCID><firstname>Daniele</firstname><surname>Doneddu</surname><name>Daniele Doneddu</name><active>true</active><ethesisStudent>false</ethesisStudent></author><author><sid>9722c301d5bbdc96e967cdc629290fec</sid><ORCID>0000-0002-1486-5537</ORCID><firstname>Matt</firstname><surname>Roach</surname><name>Matt Roach</name><active>true</active><ethesisStudent>false</ethesisStudent></author><author><sid>10b46d7843c2ba53d116ca2ed9abb56e</sid><ORCID>0000-0001-7657-7373</ORCID><firstname>Matt</firstname><surname>Jones</surname><name>Matt Jones</name><active>true</active><ethesisStudent>false</ethesisStudent></author><author><sid>6d662d9e2151b302ed384b243e2a802f</sid><ORCID>0000-0002-1960-1012</ORCID><firstname>Jen</firstname><surname>Pearson</surname><name>Jen Pearson</name><active>true</active><ethesisStudent>false</ethesisStudent></author><author><sid>c21144b9e53f16ca8354576250c9562d</sid><firstname>Alex</firstname><surname>Blandin</surname><name>Alex Blandin</name><active>true</active><ethesisStudent>false</ethesisStudent></author></swanseaauthors><date>2023-09-08</date><deptcode>CBAE</deptcode><abstract>There has been a historic focus among explainable artificial intelligence practitioners to increase user trust through the provision of explanation traces from algorithmic decisions, either by interpretable agents and expert systems or from transparent surrogate models. Beyond this deterministic causal reasoning, significant developments were made in probabilistic post-hoc explanations, often presented as percentage confidence or importance and contribution. 
However, simultaneous work in the social sciences revealed that typical users and experts both preferred and generated explanations that differed in conception from those often employed in artificial intelligence, with the existing deterministic traces and probabilistic reasoning being found less satisfactory, particularly when the best explanations for a given user were not the most likely. In this piece, we hold the position that incorporating an understanding of explanations as a model and process — inspired by social science research in human-centred explanations — will improve user satisfaction and trust in both the given decision and the overall model. We consider how practitioners may design explainable artificial intelligence that enables typical users to interface and interact with counterfactual explanations and therein develop more appropriate explanations for that specific user and decision. Specifically, we argue in favour of satisfying design desiderata for explanations that are causal, contrastive, contextual, and interactively selected with the user.
This piece is based on ongoing work that demonstrates the practicability of interactive human-centred explanatory models and is inspired by previous work to uncover design characteristics for counterfactual explanations that enhance user trust and understanding in algorithmic decisions for automated financial decisions.</abstract><type>Conference Paper/Proceeding/Abstract</type><journal>Proceedings of the AISB 2023 Convention, Swansea University, Swansea, Wales, April 13-14 2023</journal><volume/><journalNumber/><paginationStart>89</paginationStart><paginationEnd>97</paginationEnd><publisher>AISB</publisher><placeOfPublication/><isbnPrint/><isbnElectronic>978-1-908187-85-7</isbnElectronic><issnPrint/><issnElectronic/><keywords>Explainable AI, Human-centred XAI, Human Computer Interaction, Counterfactual Explanations</keywords><publishedDay>14</publishedDay><publishedMonth>4</publishedMonth><publishedYear>2023</publishedYear><publishedDate>2023-04-14</publishedDate><doi/><url>https://aisb.org.uk/wp-content/uploads/2023/05/aisb2023.pdf</url><notes>Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB) Convention 2023, Swansea University, Swansea, Wales, April 13-14 2023</notes><college>COLLEGE NAME</college><department>Management School</department><CollegeCode>COLLEGE CODE</CollegeCode><DepartmentCode>CBAE</DepartmentCode><institution>Swansea University</institution><apcterm>Another institution paid the OA fee</apcterm><funders>EPSRC</funders><projectreference>EP/S021892/1</projectreference><lastEdited>2024-06-27T13:37:02.4215425</lastEdited><Created>2023-09-08T10:56:20.5794081</Created><path><level id="1">Faculty of Science and Engineering</level><level id="2">School of Mathematics and Computer Science - Computer Science</level></path><authors><author><firstname>Daniele</firstname><surname>Doneddu</surname><orcid>0000-0003-2173-302X</orcid><order>1</order></author><author><firstname>Matt</firstname><surname>Roach</surname><orcid>0000-0002-1486-5537</orcid><order>2</order></author><author><firstname>Matt</firstname><surname>Jones</surname><orcid>0000-0001-7657-7373</orcid><order>3</order></author><author><firstname>Jen</firstname><surname>Pearson</surname><orcid>0000-0002-1960-1012</orcid><order>4</order></author><author><firstname>Alex</firstname><surname>Blandin</surname><order>5</order></author><author><firstname>David</firstname><surname>Sullivan</surname><order>6</order></author></authors><documents/><OutputDurs/></rfc1807>
title: A position on establishing effective explanations from human-centred counterfactuals for automated financial decisions
spellingShingle: A position on establishing effective explanations from human-centred counterfactuals for automated financial decisions Daniele Doneddu Matt Roach Matt Jones Jen Pearson Alex Blandin
title_short: A position on establishing effective explanations from human-centred counterfactuals for automated financial decisions
title_full: A position on establishing effective explanations from human-centred counterfactuals for automated financial decisions
title_fullStr: A position on establishing effective explanations from human-centred counterfactuals for automated financial decisions
title_full_unstemmed: A position on establishing effective explanations from human-centred counterfactuals for automated financial decisions
title_sort: A position on establishing effective explanations from human-centred counterfactuals for automated financial decisions
author_id_str_mv: b1b5db525b5dbd5713e33d143f3d5d60 9722c301d5bbdc96e967cdc629290fec 10b46d7843c2ba53d116ca2ed9abb56e 6d662d9e2151b302ed384b243e2a802f c21144b9e53f16ca8354576250c9562d
author_id_fullname_str_mv: b1b5db525b5dbd5713e33d143f3d5d60_***_Daniele Doneddu 9722c301d5bbdc96e967cdc629290fec_***_Matt Roach 10b46d7843c2ba53d116ca2ed9abb56e_***_Matt Jones 6d662d9e2151b302ed384b243e2a802f_***_Jen Pearson c21144b9e53f16ca8354576250c9562d_***_Alex Blandin
author: Daniele Doneddu Matt Roach Matt Jones Jen Pearson Alex Blandin
author2: Daniele Doneddu Matt Roach Matt Jones Jen Pearson Alex Blandin David Sullivan
format: Conference Paper/Proceeding/Abstract
container_title: Proceedings of the AISB 2023 Convention, Swansea University, Swansea, Wales, April 13-14 2023
container_start_page: 89
publishDate: 2023
institution: Swansea University
isbn: 978-1-908187-85-7
publisher: AISB
college_str: Faculty of Science and Engineering
hierarchytype:
hierarchy_top_id: facultyofscienceandengineering
hierarchy_top_title: Faculty of Science and Engineering
hierarchy_parent_id: facultyofscienceandengineering
hierarchy_parent_title: Faculty of Science and Engineering
department_str: School of Mathematics and Computer Science - Computer Science{{{_:::_}}}Faculty of Science and Engineering{{{_:::_}}}School of Mathematics and Computer Science - Computer Science
url: https://aisb.org.uk/wp-content/uploads/2023/05/aisb2023.pdf
document_store_str: 0
active_str: 0
description: There has been a historic focus among explainable artificial intelligence practitioners to increase user trust through the provision of explanation traces from algorithmic decisions, either by interpretable agents and expert systems or from transparent surrogate models. Beyond this deterministic causal reasoning, significant developments were made in probabilistic post-hoc explanations, often presented as percentage confidence or importance and contribution. However, simultaneous work in the social sciences revealed that typical users and experts both preferred and generated explanations that differed in conception from those often employed in artificial intelligence, with the existing deterministic traces and probabilistic reasoning being found less satisfactory, particularly when the best explanations for a given user were not the most likely. In this piece, we hold the position that incorporating an understanding of explanations as a model and process — inspired by social science research in human-centred explanations — will improve user satisfaction and trust in both the given decision and the overall model. We consider how practitioners may design explainable artificial intelligence that enables typical users to interface and interact with counterfactual explanations and therein develop more appropriate explanations for that specific user and decision. Specifically, we argue in favour of satisfying design desiderata for explanations that are causal, contrastive, contextual, and interactively selected with the user. This piece is based on ongoing work that demonstrates the practicability of interactive human-centred explanatory models and is inspired by previous work to uncover design characteristics for counterfactual explanations that enhance user trust and understanding in algorithmic decisions for automated financial decisions.
published_date: 2023-04-14T02:44:29Z
_version_: 1822005939947962368
score: 11.048042