Conference Paper/Proceeding/Abstract

Enhancing Fairness, Justice and Accuracy of Hybrid Human-AI Decisions by Shifting Epistemological Stances

Peter Daish, Matt Roach, Alan Dix

Communications in Computer and Information Science, Volume: 1, Pages: 323 - 331

Swansea University Authors: Peter Daish, Matt Roach, Alan Dix

  • 68367.pdf — PDF, Accepted Manuscript (408.33KB)

    Author accepted manuscript released under the terms of a Creative Commons CC-BY licence via the Swansea University Research Publications Policy (rights retention).


Published in: Communications in Computer and Information Science
ISBN: 9783031746260, 9783031746277
ISSN: 1865-0929, 1865-0937
Published: Cham: Springer Nature Switzerland, 2025

URI: https://cronfa.swan.ac.uk/Record/cronfa68367
Abstract: From applications in automating credit to aiding judges in presiding over cases of recidivism, deep-learning powered AI systems are becoming embedded in high-stakes decision-making processes, either as primary decision-makers or as supportive assistants to humans in a hybrid decision-making context, with the aim of improving the quality of decisions. However, the criteria currently used to assess a system’s ability to improve hybrid decisions are driven by a utilitarian desire to optimise accuracy through a phenomenon known as ‘complementary performance’. This desire puts the design of hybrid decision-making at odds with critical subjective concepts that affect the perception and acceptance of decisions, such as fairness. Fairness, as a subjective notion, often stands in a competitive relationship with accuracy, and as such, driving complementary behaviour under a utilitarian belief risks driving unfairness in decisions. It is our position that shifting the epistemological stances taken in the research and design of human-AI environments is necessary to incorporate the relationship between fairness and accuracy into the notion of ‘complementary behaviour’, in order to observe ‘enhanced’ hybrid human-AI decisions.
College: Faculty of Science and Engineering