E-Thesis
Explainable Artificial Intelligence for Medical Science / JAMIE DUELL
Swansea University Author: JAMIE DUELL
DOI (Published version): 10.23889/SUthesis.66607
Abstract
Explainable Artificial Intelligence (XAI) is at the forefront of Artificial Intelligence (AI) research. As AI development has become increasingly complex with modern computational capabilities, the transparency of AI models decreases. This promotes the necessity of XAI: under the General Data Protection Regulation (GDPR) "right to an explanation", it is unlawful not to provide a person with an explanation for a decision reached by algorithmic judgement. The latter is crucial in critical fields such as healthcare, finance and law. For this thesis, the healthcare field, and more specifically Electronic Health Records (EHRs), is the main focus for the development and application of XAI methods. This thesis offers prospective approaches to enhance the explainability of EHRs. It presents three different perspectives, encompassing the Model, the Data, and the User, aimed at elevating explainability. The model perspective draws upon improvements to the local explainability of black-box AI methods. The data perspective improves the quality of the data provided to AI methods, such that the XAI methods applied to the AI models account for a key property: missingness. Finally, the user perspective provides an accessible form of explainability by giving less experienced users an interface for both AI and XAI methods. Thereby, this thesis provides innovative approaches to improve the explanations given for EHRs, verified through empirical and theoretical analysis of a collection of introduced and existing methods. We propose a selection of XAI methods that collectively build upon current leading literature in the field: Polynomial Adaptive Local Explanations (PALE) for patient-specific explanations; Counterfactual-Integrated Gradients (CF-IG) and Quantified Uncertainty Counterfactual Explanations (QUCE), which utilise counterfactual thinking; Batch-Integrated Gradients (Batch-IG), which addresses the temporal nature of EHR data; and Surrogate Set Imputation (SSI), which addresses missing-value imputation. Finally, we propose a tool called ExMed that utilises XAI methods and provides ease of access to AI and XAI methods.
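Several of the thesis's contributions (CF-IG, Batch-IG) extend the standard Integrated Gradients attribution method. As background only, here is a minimal sketch of vanilla Integrated Gradients, not the thesis's own methods; the `grad_fn` callback and the toy quadratic model are illustrative assumptions:

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=201):
    """Approximate IG attributions: (x - baseline) times the average
    gradient along the straight-line path from baseline to x (Riemann sum)."""
    alphas = np.linspace(0.0, 1.0, steps)
    path = baseline + alphas[:, None] * (x - baseline)  # interpolated inputs
    grads = np.array([grad_fn(p) for p in path])        # gradient at each path point
    return (x - baseline) * grads.mean(axis=0)

# Toy model f(x) = sum(x_i^2), so the gradient is 2x and the exact
# IG attribution for each feature is x_i^2.
x = np.array([1.0, 2.0])
attr = integrated_gradients(lambda p: 2.0 * p, x, baseline=np.zeros_like(x))
# attr ≈ [1.0, 4.0]; attributions sum to f(x) - f(baseline) = 5 (completeness).
```

The completeness property shown in the last comment is what makes IG-style methods attractive for patient-level EHR explanations: the per-feature attributions account exactly for the change in model output relative to the baseline.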
| Published: | Swansea University, Wales, UK, 2024 |
|---|---|
| Institution: | Swansea University |
| Degree level: | Doctoral |
| Degree name: | Ph.D. |
| Supervisor: | Seisenberger, M. & Aarts, G. |
| URI: | https://cronfa.swan.ac.uk/Record/cronfa66607 |
first_indexed |
2024-06-06T14:46:40Z |
---|---|
last_indexed |
2024-06-06T14:46:40Z |
id |
cronfa66607 |
recordtype |
RisThesis |
fullrecord |
<?xml version="1.0" encoding="utf-8"?><rfc1807 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><bib-version>v2</bib-version><id>66607</id><entry>2024-06-06</entry><title>Explainable Artificial Intelligence for Medical Science</title><swanseaauthors><author><sid>d47a5e7209f1dc4a42293528db4a2cfd</sid><firstname>JAMIE</firstname><surname>DUELL</surname><name>JAMIE DUELL</name><active>true</active><ethesisStudent>false</ethesisStudent></author></swanseaauthors><date>2024-06-06</date><abstract>Explainable Artificial Intelligence (XAI) is at the forefront of Artificial Intelligence (AI) research. As the development of AI has become increasingly complex with modern day computational capabilities, the transparency of the AI models decreases. This promotes the necessity of XAI, as it is illicit as per the General Data Protection Regulation (GDPR) “right to an explanation” to not provide a person with an explanation given a decision reached after algorithmic judgement. The latter is crucial in critical fields such as Healthcare, Finance and Law. For this thesis, the Healthcare field and more specifically Electronic Health Records are the main focus for the development and application of XAI methods. This thesis offers prospective approaches to enhance the explainability of Electronic Health Records (EHRs). It presents three different perspectives that encompass the Model, Data, and the User, aimed at elevating explainability. The model perspective draws upon improvements to the local explainability of black-box AI methods. The data perspective enables an improvement to the quality of the data provided for AI methods, such that the XAI methods applied to the AI models account for a key property of missingness. 
Finally, the user perspective provides an accessible form of explainability by allowing less experienced users to have an interface to use both AI and XAI methods. Thereby, this thesis provides new innovative approaches to improve the explanations that are given for EHRs. This is verified through empirical and theoretical analysis of a collection of introduced and existing methods. We propose a selection of XAI methods that collectively build upon current leading literature in the field. Here we propose the methods Polynomial Adaptive Local Explanations (PALE) for patient specific explanations, both Counterfactual-Integrated Gradients (CF-IG) and Quantified Uncertainty Counterfactual Explanations (QUCE) that utilise counterfactual thinking, Batch-Integrated Gradients (Batch-IG) to address the temporal nature of EHR data and Surrogate Set Imputation (SSI) that addresses missing value imputation. Finally, we propose a tool called ExMed that utilises XAI methods and allows for the ease of access for AI and XAI methods.</abstract><type>E-Thesis</type><journal/><volume/><journalNumber/><paginationStart/><paginationEnd/><publisher/><placeOfPublication>Swansea University, Wales, UK</placeOfPublication><isbnPrint/><isbnElectronic/><issnPrint/><issnElectronic/><keywords>Explainability, Interpretability, Artificial Intelligence, Machine Learning, Deep Learning</keywords><publishedDay>9</publishedDay><publishedMonth>4</publishedMonth><publishedYear>2024</publishedYear><publishedDate>2024-04-09</publishedDate><doi>10.23889/SUthesis.66607</doi><url/><notes>A selection of content is redacted or is partially redacted from this thesis to protect sensitive and personal information</notes><college>COLLEGE NANME</college><CollegeCode>COLLEGE CODE</CollegeCode><institution>Swansea University</institution><supervisor>Seisenberger, M.; & Aarts, G</supervisor><degreelevel>Doctoral</degreelevel><degreename>Ph.D</degreename><degreesponsorsfunders>EPSRC Doctoral Training Grant</degreesponsorsfunders><apcterm/><funders>EPSRC Doctoral Training Grant</funders><projectreference/><lastEdited>2024-06-06T15:56:13.6978143</lastEdited><Created>2024-06-06T15:32:28.9889687</Created><path><level id="1">Faculty of Science and Engineering</level><level id="2">School of Mathematics and Computer Science - Computer Science</level></path><authors><author><firstname>JAMIE</firstname><surname>DUELL</surname><order>1</order></author></authors><documents><document><filename>66607__30556__72bea4f95b0f46b198f697782841e891.pdf</filename><originalFilename>2024_Duell_J.final.66607.pdf</originalFilename><uploaded>2024-06-06T15:45:58.0090534</uploaded><type>Output</type><contentLength>5778630</contentLength><contentType>application/pdf</contentType><version>E-Thesis – open access</version><cronfaStatus>true</cronfaStatus><documentNotes>Copyright: The Author, Jamie Duell, 2023</documentNotes><copyrightCorrect>true</copyrightCorrect><language>eng</language></document></documents><OutputDurs/></rfc1807> |
title |
Explainable Artificial Intelligence for Medical Science |
author_id_str_mv |
d47a5e7209f1dc4a42293528db4a2cfd |
author_id_fullname_str_mv |
d47a5e7209f1dc4a42293528db4a2cfd_***_JAMIE DUELL |
author |
JAMIE DUELL |
author2 |
JAMIE DUELL |
format |
E-Thesis |
publishDate |
2024 |
institution |
Swansea University |
doi_str_mv |
10.23889/SUthesis.66607 |
college_str |
Faculty of Science and Engineering |
hierarchytype |
|
hierarchy_top_id |
facultyofscienceandengineering |
hierarchy_top_title |
Faculty of Science and Engineering |
hierarchy_parent_id |
facultyofscienceandengineering |
hierarchy_parent_title |
Faculty of Science and Engineering |
department_str |
School of Mathematics and Computer Science - Computer Science; Faculty of Science and Engineering; School of Mathematics and Computer Science - Computer Science |
document_store_str |
1 |
active_str |
0 |
published_date |
2024-04-09T15:56:13Z |
_version_ |
1801124077493026816 |