
Journal article

Beyond ideals: why the (medical) AI industry needs to motivate behavioural change in line with fairness and transparency values, and how it can do it

Alice Liefgreen, Netta Weinstein, Sandra Wachter, Brent Mittelstadt

AI & Society, Volume 39, Pages 2183–2199

Swansea University Author: Alice Liefgreen (Hilary Rodham Clinton School of Law, Faculty of Humanities and Social Sciences)

  • 63376.pdf — PDF, Version of Record (856.83 KB)

    © The Author(s) 2023. Distributed under the terms of a Creative Commons Attribution 4.0 License (CC BY 4.0).

Abstract

Artificial intelligence (AI) is increasingly relied upon by clinicians for making diagnostic and treatment decisions, playing an important role in imaging, diagnosis, risk analysis, lifestyle monitoring, and health information management. While research has identified biases in healthcare AI systems and proposed technical solutions to address these, we argue that effective solutions require human engagement. Furthermore, there is a lack of research on how to motivate the adoption of these solutions and promote investment in designing AI systems that align with values such as transparency and fairness from the outset. Drawing on insights from psychological theories, we assert the need to understand the values that underlie decisions made by individuals involved in creating and deploying AI systems. We describe how this understanding can be leveraged to increase engagement with de-biasing and fairness-enhancing practices within the AI healthcare industry, ultimately leading to sustained behavioral change via autonomy-supportive communication strategies rooted in motivational and social psychology theories. In developing these pathways to engagement, we consider the norms and needs that govern the AI healthcare domain, and we evaluate incentives for maintaining the status quo against economic, legal, and social incentives for behavior change in line with transparency and fairness values.


Published in: AI & Society
ISSN: 0951-5666 (print); 1435-5655 (electronic)
Published: Springer Nature, 1 October 2024
DOI: 10.1007/s00146-023-01684-3
Keywords: Artificial intelligence, Healthcare, Medicine, Fairness, Bias, Motivation, Behaviour change
Funders: Wellcome Trust (223765/Z/21/Z), Alfred P. Sloan Foundation (G-2021-16779), Department of Health and Social Care, British Academy (PF2\180114), Luminate Group, Miami Foundation
Online Access: https://doi.org/10.1007/s00146-023-01684-3

URI: https://cronfa.swan.ac.uk/Record/cronfa63376