E-Thesis
Explainable Artificial Intelligence across Domains: Refinement of SHAP and Practical Applications / Veera Raghava Reddy Kovvuri
Swansea University Author: Veera Raghava Reddy Kovvuri
PDF | E-Thesis – open access
Copyright: The Author, Veera Raghava Reddy Kovvuri, 2024. Distributed under the terms of a Creative Commons Attribution 4.0 License (CC BY 4.0).
Download (5.71MB)
DOI (Published version): 10.23889/SUThesis.67149
Abstract
Explainable Artificial Intelligence (XAI) has become a crucial area within AI, emphasizing the transparency and interpretability of complex models. In this context, this research examines diverse datasets from the medical, financial, and socio-economic domains, applying existing XAI techniques to enhance understanding and clarity of the results. This work makes a notable contribution to XAI by introducing the Controllable fActor Feature Attribution (CAFA) approach, a novel method that categorizes dataset features into 'controllable' and 'uncontrollable' groups. This categorization enables a more nuanced and actionable analysis of feature importance.

Furthermore, the research proposes an extension to CAFA, the Uncertainty-based Controllable fActor Feature Attribution (UCAFA) method, which incorporates a Variational Autoencoder (VAE) to ensure that perturbations remain within the expected data distribution, thereby enhancing the reliability of feature attributions. The effectiveness and versatility of CAFA are showcased through its application in two distinct domains: medical and socio-economic. In the medical domain, a case study is conducted on the efficacy of COVID-19 non-pharmaceutical control measures, providing valuable insights into the impact and effectiveness of the different strategies employed to control the pandemic. Additionally, UCAFA is applied to the medical domain, demonstrating its ability to improve the reliability of feature attributions by accounting for uncertainty. The socio-economic domain is investigated by applying CAFA to several datasets, yielding insights into income prediction, credit risk assessment, and recidivism prediction. In the financial domain, the analysis focuses on global equity funds using established XAI methodologies, particularly the integration of the XGBoost model with Shapley values. This analysis provides critical insights into fund performance and diversification strategies across the G10 countries. This thesis highlights the potential of CAFA and UCAFA as promising directions in XAI, setting the stage for further research and applications.
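The controllable/uncontrollable split at the heart of CAFA can be illustrated with a minimal sketch. The feature names, attribution values, and grouping below are hypothetical illustrations, not taken from the thesis; in practice the per-feature attributions would come from a SHAP-style explainer.

```python
# Minimal sketch of a CAFA-style grouping of feature attributions.
# All feature names and attribution values below are hypothetical.

features = ["mask_mandate", "school_closure", "population_density", "median_age"]
attributions = [0.42, 0.31, -0.18, 0.05]  # e.g. SHAP values for one prediction

# Analysts label which features a decision-maker can actually act on.
CONTROLLABLE = {"mask_mandate", "school_closure"}

def grouped_attribution(features, attributions, controllable):
    """Aggregate absolute attribution mass by controllability group."""
    groups = {"controllable": 0.0, "uncontrollable": 0.0}
    for name, value in zip(features, attributions):
        key = "controllable" if name in controllable else "uncontrollable"
        groups[key] += abs(value)
    return groups

print(grouped_attribution(features, attributions, CONTROLLABLE))
```

Reporting the two group totals separately lets a practitioner see how much of a model's behaviour is driven by factors that can actually be changed, which is the kind of actionable analysis the abstract describes.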
Published: Swansea University, Wales, UK, 26 June 2024
Institution: Swansea University
Degree level: Doctoral
Degree name: Ph.D.
Supervisor: Seisenberger, M.
URI: https://cronfa.swan.ac.uk/Record/cronfa67149
Keywords: Explainable Artificial Intelligence, Artificial Intelligence, Machine Learning
Note: A selection of content is redacted or partially redacted from this thesis to protect sensitive and personal information.