
Conference Paper/Proceeding/Abstract

A position on establishing effective explanations from human-centred counterfactuals for automated financial decisions

Daniele Doneddu, Matt Roach, Matt Jones, Jen Pearson, Alex Blandin, David Sullivan

Proceedings of the AISB 2023 Convention, Swansea University, Swansea, Wales, April 13-14, 2023, Pages: 89-97

Swansea University Authors: Daniele Doneddu, Matt Roach, Matt Jones, Jen Pearson, Alex Blandin


Published in: Proceedings of the AISB 2023 Convention, Swansea University, Swansea, Wales, April 13-14, 2023
ISBN: 978-1-908187-85-7
Published: AISB 2023
Online Access: https://aisb.org.uk/wp-content/uploads/2023/05/aisb2023.pdf
URI: https://cronfa.swan.ac.uk/Record/cronfa64477
Abstract: There has been a historic focus among explainable artificial intelligence practitioners on increasing user trust through the provision of explanation traces for algorithmic decisions, whether from interpretable agents and expert systems or from transparent surrogate models. Beyond this deterministic causal reasoning, significant developments have been made in probabilistic post-hoc explanations, often presented as percentage confidence or as importance and contribution scores. However, concurrent work in the social sciences revealed that typical users and experts alike both preferred and generated explanations that differed in conception from those commonly employed in artificial intelligence, with the existing deterministic traces and probabilistic reasoning found less satisfactory, particularly when the best explanation for a given user was not the most likely one. In this piece, we hold the position that incorporating an understanding of explanations as a model and a process, inspired by social science research in human-centred explanations, will improve user satisfaction and trust in both the given decision and the overall model. We consider how practitioners may design explainable artificial intelligence that enables typical users to interface and interact with counterfactual explanations and thereby develop more appropriate explanations for that specific user and decision. Specifically, we argue in favour of satisfying design desiderata for explanations that are causal, contrastive, contextual, and interactively selected with the user. This piece is based on ongoing work that demonstrates the practicability of interactive human-centred explanatory models and is inspired by previous work to uncover design characteristics for counterfactual explanations that enhance user trust and understanding in automated financial decisions.
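To make the kind of counterfactual explanation discussed in the abstract concrete, the sketch below shows one naive way such an explanation could be generated for an automated credit decision: a greedy search for a small change to an applicant's features that flips a classifier's decision ("declined, but would have been approved if income were higher"). This is an illustrative assumption, not the paper's method: the toy model, the feature names (income, debt ratio, years employed), and the counterfactual() helper are all hypothetical, and the search is a plain greedy loop rather than any established counterfactual algorithm.

```python
# Illustrative sketch only: a naive greedy counterfactual search over a toy
# credit-scoring model. Feature names, data, and the helper are hypothetical
# assumptions, not the paper's implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy applicants: [income (GBP thousands), debt ratio, years employed]
X = rng.normal([40, 0.4, 5], [15, 0.15, 4], size=(500, 3))
y = (X[:, 0] - 60 * X[:, 1] + 2 * X[:, 2] > 25).astype(int)  # 1 = approve

model = LogisticRegression().fit(X, y)

def counterfactual(x, target=1, step=0.05, max_iter=500):
    """Greedily nudge one feature at a time (steps scaled by the training
    spread) until the model's decision flips to `target`."""
    scale = X.std(axis=0)
    cf = x.astype(float).copy()
    for _ in range(max_iter):
        if model.predict(cf.reshape(1, -1))[0] == target:
            return cf  # decision flipped: this is the counterfactual
        # All candidate moves are the same scaled size, so keeping the one
        # with the highest target-class probability picks the best gain.
        best, best_p = None, -np.inf
        for j in range(len(cf)):
            for sign in (+1.0, -1.0):
                trial = cf.copy()
                trial[j] += sign * step * scale[j]
                p = model.predict_proba(trial.reshape(1, -1))[0, target]
                if p > best_p:
                    best, best_p = trial, p
        cf = best
    return None  # no counterfactual found within the step budget

applicant = np.array([30.0, 0.55, 1.0])  # a declined applicant
cf = counterfactual(applicant)
if cf is not None:
    names = ["income", "debt ratio", "years employed"]
    for name, old, new in zip(names, applicant, cf):
        if abs(new - old) > 1e-6:
            print(f"Declined, but approved if {name}: {old:.2f} -> {new:.2f}")
```

A human-centred, interactive variant along the lines the paper argues for would additionally let the user constrain which features may move (contextual) and compare alternative counterfactuals (interactively selected), rather than returning a single machine-chosen change.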
Item Description: Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB) Convention 2023, Swansea University, Swansea, Wales, April 13-14, 2023
Keywords: Explainable AI, Human-centred XAI, Human Computer Interaction, Counterfactual Explanations
College: Faculty of Science and Engineering
Funders: EPSRC
Start Page: 89
End Page: 97