

Explainability and Uncertainty Quantification in Networks and Social Systems / SOPHIE SADLER

Swansea University Author: SOPHIE SADLER

  • 2024_Sadler_S.final.67755.pdf

    PDF | E-Thesis – open access

    Copyright: The Author, Sophie Sadler, 2024. Distributed under the terms of a Creative Commons Attribution 4.0 License (CC BY 4.0).

    Download (12.86 MB)

DOI (Published version): 10.23889/SUThesis.67755


Published: Swansea University, Wales, UK 2024
Institution: Swansea University
Degree level: Doctoral
Degree name: Ph.D
Supervisor: Archambault, D
URI: https://cronfa.swan.ac.uk/Record/cronfa67755
Abstract: The complexity of deep learning models has motivated the development of explainability approaches within the field of artificial intelligence. However, there are several adjacent fields to deep learning where similarly complex models are used to make decisions, which can also benefit from improved interpretability. In this thesis, we therefore focus on the application of existing explainability approaches, including the use of visualisation, to problems outside the traditional scope of deep learning. In particular, our focus is on the fields of social network analysis and optimisation. In addition to the use of explainability approaches, we also explore how uncertainty quantification can be used to improve the trustworthiness of decision-making within social network applications.

In the first two chapters of this thesis, we propose a methodology to apply feature importance scoring to the community detection problem in network analysis, where common approaches typically provide outputs with little explanation. We propose a longlist of features on several levels (individual nodes, pairs of nodes, and sets of nodes) which we believe are interpretable to network analysis experts, and explore which of these can be used to understand the outputs of the algorithms.

We then apply existing uncertainty quantification approaches to a new prediction problem which arises in large online social networks, where we analyse how these approaches perform in the face of the unusual data distributions that we see in this setting. In particular, we are interested in the engagement that online content receives.

Finally, we propose a novel visualisation approach to aid understanding in fitness landscape analysis. We perform dimensionality reduction on the locations of points in the landscape, including the optima, before representing these with a network structure which encodes additional information about the landscape. This chapter focuses on optimisation as another domain beyond network analysis which can benefit from explainability.
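The feature-based approach described above can be illustrated with a minimal sketch. The graph, the features (degree, local clustering, boundary fraction), and the variance-based scoring below are all assumptions chosen for illustration; they are not the thesis's actual feature longlist or importance-scoring method.

```python
# Minimal sketch (not the thesis's pipeline): compute interpretable
# node-level features on a toy graph and rank them by a crude score.
# Graph: two triangles (0-1-2 and 3-4-5) joined by the bridge edge 2-3.
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
}
# Hypothetical community labels, standing in for a detector's output.
community = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}

def degree(n):
    return len(adj[n])

def clustering(n):
    # Fraction of neighbour pairs that are themselves connected.
    nbrs = list(adj[n])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def boundary_fraction(n):
    # Fraction of a node's edges crossing its community boundary.
    return sum(1 for m in adj[n] if community[m] != community[n]) / len(adj[n])

features = {"degree": degree, "clustering": clustering,
            "boundary_fraction": boundary_fraction}

# Score each feature by the variance of its values across nodes, a crude
# proxy for how much it distinguishes nodes from one another.
scores = {}
for name, f in features.items():
    vals = [f(n) for n in adj]
    mean = sum(vals) / len(vals)
    scores[name] = sum((v - mean) ** 2 for v in vals) / len(vals)

for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.3f}")
```

In the thesis itself, features like these are assessed with proper importance-scoring techniques against the outputs of community detection algorithms; the variance ranking here only shows the shape of the idea.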
Item Description: A selection of content is redacted or is partially redacted from this thesis to protect sensitive and personal information.
Keywords: explainability, social network analysis, optimisation, uncertainty quantification, machine learning
College: Faculty of Science and Engineering
Funders: UKRI AIMLAC CDT