

Explainability and Uncertainty Quantification in Networks and Social Systems / SOPHIE SADLER

Swansea University Author: SOPHIE SADLER

  • 2024_Sadler_S.final.67755.pdf

    PDF | E-Thesis – open access

    Copyright: The Author, Sophie Sadler, 2024 CC BY - Distributed under the terms of a Creative Commons Attribution 4.0 License (CC BY 4.0).

    Download (12.86MB)

DOI (Published version): 10.23889/SUThesis.67755

Abstract

The complexity of deep learning models has motivated the development of explainability approaches within the field of artificial intelligence. However, there are several adjacent fields to deep learning where similarly complex models are used to make decisions, which can also benefit from improved interpretability. In this thesis, we therefore focus on the application of existing explainability approaches, including the use of visualisation, to problems outside the traditional scope of deep learning. In particular, our focus is on the fields of social network analysis and optimisation. In addition to the use of explainability approaches, we also explore how uncertainty quantification can be used to improve the trustworthiness of decision-making within social network applications. In the first two chapters of this thesis, we propose a methodology to apply feature importance scoring to the community detection problem in network analysis, where common approaches typically provide outputs with little explanation. We propose a longlist of features on several levels (individual nodes, pairs of nodes, and sets of nodes) which we believe are interpretable to network analysis experts, and explore which of these can be used to understand the outputs of the algorithms. We then apply existing uncertainty quantification approaches to a new prediction problem which arises in large online social networks, where we analyse how these approaches perform in the face of the unusual data distributions that we see in this setting. In particular, we are interested in the engagement that online content receives. Finally, we propose a novel visualisation approach to aid understanding in fitness landscape analysis. We perform dimensionality reduction on the locations of points in the landscape, including the optima, before representing these with a network structure which encodes additional information about the landscape. This chapter focuses on optimisation as another domain beyond network analysis which can benefit from explainability.
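To give a flavour of the first contribution described above (feature importance scoring applied to community detection), the following is a hypothetical Python sketch of the general idea only, not the thesis's actual method or feature longlist: detect communities, treat the detected labels as a prediction target, fit a classifier over simple node-level features, and read off which features explain the assignments. The graph (Zachary's karate club), the classifier (a random forest), and the three features are all illustrative stand-ins.

```python
# Hypothetical illustration: explaining community-detection output via
# feature-importance scoring. Not the author's actual methodology.
import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.ensemble import RandomForestClassifier

G = nx.karate_club_graph()

# Detect communities, then use each node's detected label as a target.
labels = {n: i for i, comm in enumerate(greedy_modularity_communities(G))
          for n in comm}

# A few simple node-level features (stand-ins for an expert-curated longlist).
clustering = nx.clustering(G)
core = nx.core_number(G)
X = np.array([[G.degree(n), clustering[n], core[n]] for n in G.nodes])
y = np.array([labels[n] for n in G.nodes])

# Fit a classifier to the assignments and inspect its feature importances:
# features the model relies on are candidates for explaining the communities.
clf = RandomForestClassifier(random_state=0).fit(X, y)
for name, imp in zip(["degree", "clustering", "core_number"],
                     clf.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

The importance scores indicate which structural properties best separate the detected communities, which is one way to attach an interpretable signal to an otherwise opaque partition.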


Published: Swansea University, Wales, UK 2024
Institution: Swansea University
Degree level: Doctoral
Degree name: Ph.D
Supervisor: Archambault, D
URI: https://cronfa.swan.ac.uk/Record/cronfa67755
first_indexed 2024-09-20T12:15:37Z
last_indexed 2024-09-20T12:15:37Z
id cronfa67755
recordtype RisThesis
fullrecord <?xml version="1.0" encoding="utf-8"?><rfc1807 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><bib-version>v2</bib-version><id>67755</id><entry>2024-09-20</entry><title>Explainability and Uncertainty Quantification in Networks and Social Systems</title><swanseaauthors><author><sid>c739e4cbce9f22e09dca0969968e2da4</sid><firstname>SOPHIE</firstname><surname>SADLER</surname><name>SOPHIE SADLER</name><active>true</active><ethesisStudent>false</ethesisStudent></author></swanseaauthors><date>2024-09-20</date><abstract>The complexity of deep learning models has motivated the development of explainability approaches within the field of artificial intelligence. However, there are several adjacent fields to deep learning where similarly complex models are used to make decisions, which can also benefit from improved interpretability. In this thesis, we therefore focus on the application of existing explainability approaches, including the use of visualisation, to problems outside the traditional scope of deep learning. In particular, our focus is on the fields of social network analysis and optimisation. In addition to the use of explainability approaches, we also explore how uncertainty quantification can be used to improve the trustworthiness of decision-making within social network applications. In the first two chapters of this thesis, we propose a methodology to apply feature importance scoring to the community detection problem in network analysis, where common approaches typically provide outputs with little explanation. 
We propose a longlist of features on several levels (individual nodes, pairs of nodes, and sets of nodes) which we believe are interpretable to network analysis experts, and explore which of these can be used to understand the outputs of the algorithms. We then apply existing uncertainty quantification approaches to a new prediction problem which arises in large online social networks, where we analyse how these approaches perform in the face of the unusual data distributions that we see in this setting. In particular, we are interested in the engagement that online content receives. Finally, we propose a novel visualisation approach to aid understanding in fitness landscape analysis. We perform dimensionality reduction on the locations of points in the landscape, including the optima, before representing these with a network structure which encodes additional information about the landscape. This chapter focuses on optimisation as another domain beyond network analysis which can benefit from explainability.</abstract><type>E-Thesis</type><journal/><volume/><journalNumber/><paginationStart/><paginationEnd/><publisher/><placeOfPublication>Swansea University, Wales, UK</placeOfPublication><isbnPrint/><isbnElectronic/><issnPrint/><issnElectronic/><keywords>explainability, social network analysis, optimisation, uncertainty quantification, machine learning</keywords><publishedDay>18</publishedDay><publishedMonth>8</publishedMonth><publishedYear>2024</publishedYear><publishedDate>2024-08-18</publishedDate><doi>10.23889/SUThesis.67755</doi><url/><notes>A selection of content is redacted or is partially redacted from this thesis to protect sensitive and personal information.</notes><college>COLLEGE NANME</college><CollegeCode>COLLEGE CODE</CollegeCode><institution>Swansea University</institution><supervisor>Archambault, D</supervisor><degreelevel>Doctoral</degreelevel><degreename>Ph.D</degreename><degreesponsorsfunders>UKRI AIMLAC CDT</degreesponsorsfunders><apcterm/><funders>UKRI AIMLAC CDT</funders><projectreference/><lastEdited>2024-09-20T13:20:47.8203446</lastEdited><Created>2024-09-20T12:59:35.5517877</Created><path><level id="1">Faculty of Science and Engineering</level><level id="2">School of Mathematics and Computer Science - Computer Science</level></path><authors><author><firstname>SOPHIE</firstname><surname>SADLER</surname><order>1</order></author></authors><documents><document><filename>67755__31412__edcd449326d14a5eaa234bfda05f6082.pdf</filename><originalFilename>2024_Sadler_S.final.67755.pdf</originalFilename><uploaded>2024-09-20T13:14:29.9405742</uploaded><type>Output</type><contentLength>13484426</contentLength><contentType>application/pdf</contentType><version>E-Thesis – open access</version><cronfaStatus>true</cronfaStatus><documentNotes>Copyright: The Author, Sophie Sadler, 2024 CC BY - Distributed under the terms of a Creative Commons Attribution 4.0 License (CC BY 4.0).</documentNotes><copyrightCorrect>true</copyrightCorrect><language>eng</language><licence>https://creativecommons.org/licenses/by/4.0/</licence></document></documents><OutputDurs/></rfc1807>
spelling v2 67755 2024-09-20 Explainability and Uncertainty Quantification in Networks and Social Systems c739e4cbce9f22e09dca0969968e2da4 SOPHIE SADLER SOPHIE SADLER true false 2024-09-20 The complexity of deep learning models has motivated the development of explainability approaches within the field of artificial intelligence. However, there are several adjacent fields to deep learning where similarly complex models are used to make decisions, which can also benefit from improved interpretability. In this thesis, we therefore focus on the application of existing explainability approaches, including the use of visualisation, to problems outside the traditional scope of deep learning. In particular, our focus is on the fields of social network analysis and optimisation. In addition to the use of explainability approaches, we also explore how uncertainty quantification can be used to improve the trustworthiness of decision-making within social network applications. In the first two chapters of this thesis, we propose a methodology to apply feature importance scoring to the community detection problem in network analysis, where common approaches typically provide outputs with little explanation. We propose a longlist of features on several levels (individual nodes, pairs of nodes, and sets of nodes) which we believe are interpretable to network analysis experts, and explore which of these can be used to understand the outputs of the algorithms. We then apply existing uncertainty quantification approaches to a new prediction problem which arises in large online social networks, where we analyse how these approaches perform in the face of the unusual data distributions that we see in this setting. In particular, we are interested in the engagement that online content receives. Finally, we propose a novel visualisation approach to aid understanding in fitness landscape analysis. 
We perform dimensionality reduction on the locations of points in the landscape, including the optima, before representing these with a network structure which encodes additional information about the landscape. This chapter focuses on optimisation as another domain beyond network analysis which can benefit from explainability. E-Thesis Swansea University, Wales, UK explainability, social network analysis, optimisation, uncertainty quantification, machine learning 18 8 2024 2024-08-18 10.23889/SUThesis.67755 A selection of content is redacted or is partially redacted from this thesis to protect sensitive and personal information. COLLEGE NANME COLLEGE CODE Swansea University Archambault, D Doctoral Ph.D UKRI AIMLAC CDT UKRI AIMLAC CDT 2024-09-20T13:20:47.8203446 2024-09-20T12:59:35.5517877 Faculty of Science and Engineering School of Mathematics and Computer Science - Computer Science SOPHIE SADLER 1 67755__31412__edcd449326d14a5eaa234bfda05f6082.pdf 2024_Sadler_S.final.67755.pdf 2024-09-20T13:14:29.9405742 Output 13484426 application/pdf E-Thesis – open access true Copyright: The Author, Sophie Sadler, 2024 CC BY - Distributed under the terms of a Creative Commons Attribution 4.0 License (CC BY 4.0). true eng https://creativecommons.org/licenses/by/4.0/
title Explainability and Uncertainty Quantification in Networks and Social Systems
spellingShingle Explainability and Uncertainty Quantification in Networks and Social Systems
SOPHIE SADLER
title_short Explainability and Uncertainty Quantification in Networks and Social Systems
title_full Explainability and Uncertainty Quantification in Networks and Social Systems
title_fullStr Explainability and Uncertainty Quantification in Networks and Social Systems
title_full_unstemmed Explainability and Uncertainty Quantification in Networks and Social Systems
title_sort Explainability and Uncertainty Quantification in Networks and Social Systems
author_id_str_mv c739e4cbce9f22e09dca0969968e2da4
author_id_fullname_str_mv c739e4cbce9f22e09dca0969968e2da4_***_SOPHIE SADLER
author SOPHIE SADLER
author2 SOPHIE SADLER
format E-Thesis
publishDate 2024
institution Swansea University
doi_str_mv 10.23889/SUThesis.67755
college_str Faculty of Science and Engineering
hierarchytype
hierarchy_top_id facultyofscienceandengineering
hierarchy_top_title Faculty of Science and Engineering
hierarchy_parent_id facultyofscienceandengineering
hierarchy_parent_title Faculty of Science and Engineering
department_str School of Mathematics and Computer Science - Computer Science{{{_:::_}}}Faculty of Science and Engineering{{{_:::_}}}School of Mathematics and Computer Science - Computer Science
document_store_str 1
active_str 0
description The complexity of deep learning models has motivated the development of explainability approaches within the field of artificial intelligence. However, there are several adjacent fields to deep learning where similarly complex models are used to make decisions, which can also benefit from improved interpretability. In this thesis, we therefore focus on the application of existing explainability approaches, including the use of visualisation, to problems outside the traditional scope of deep learning. In particular, our focus is on the fields of social network analysis and optimisation. In addition to the use of explainability approaches, we also explore how uncertainty quantification can be used to improve the trustworthiness of decision-making within social network applications. In the first two chapters of this thesis, we propose a methodology to apply feature importance scoring to the community detection problem in network analysis, where common approaches typically provide outputs with little explanation. We propose a longlist of features on several levels (individual nodes, pairs of nodes, and sets of nodes) which we believe are interpretable to network analysis experts, and explore which of these can be used to understand the outputs of the algorithms. We then apply existing uncertainty quantification approaches to a new prediction problem which arises in large online social networks, where we analyse how these approaches perform in the face of the unusual data distributions that we see in this setting. In particular, we are interested in the engagement that online content receives. Finally, we propose a novel visualisation approach to aid understanding in fitness landscape analysis. We perform dimensionality reduction on the locations of points in the landscape, including the optima, before representing these with a network structure which encodes additional information about the landscape. 
This chapter focuses on optimisation as another domain beyond network analysis which can benefit from explainability.
published_date 2024-08-18T13:20:47Z
_version_ 1810717576691449856
score 11.036116