
Conference Paper/Proceeding/Abstract

To err is AI

Alba Bisante, Alan Dix, Emanuele Panizzi, Stefano Zeppieri

CHItaly '23: Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter, Pages: 1–11

Swansea University Author: Alan Dix

Full text not available from this repository: check for access using links below.

DOI (Published version): 10.1145/3605390.3605414

Abstract

In this work, we analyze the different contexts in which one chooses to integrate artificial intelligence into an interface and the implications of this choice in managing user interaction. While AI in systems can provide significant benefits, it is not infallible and can make errors that seriously affect users. We aim to understand how to design more robust human-AI systems so that these initial AI errors do not lead to more catastrophic failures. To prevent failures, it is essential to detect errors as early as possible and have clear mechanisms to repair them. However, detecting errors in AI systems can be challenging. Therefore, we examine various approaches to error detection and repair, including post-hoc estimation, the use of traces and ambiguity, and multiple sensor layers.

Published in: CHItaly '23: Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter
ISBN: 979-8-4007-0806-0
Published: New York, NY, USA: ACM, 20 September 2023
Keywords: HCI, AI, errors, failures, error detection, error repair, user perception, interaction design
Online Access: http://dx.doi.org/10.1145/3605390.3605414
URI: https://cronfa.swan.ac.uk/Record/cronfa64783