E-Thesis

Dimensions of Algorithmic Bias: Exploring its Influence on Organisational Decision-making Processes in Saudi Arabia / RAZAN ALOWAYFI

Swansea University Author: RAZAN ALOWAYFI

  • E-Thesis under embargo until: 31st December 2030

DOI (Published version): 10.23889/SUThesis.71643

Abstract

Algorithms are now central to organisational decision-making, influencing how platforms are organised and resources allocated based on complex data. Their opaque nature, however, often hides decision-making processes, raising concerns about fairness and potential discrimination. Against this background, this doctoral study examines how algorithmic bias emerges, is experienced and is managed within Saudi organisations during a period of rapid digital transformation shaped by Vision 2030. The study investigates the socio-technical, organisational, ethical and cultural conditions that influence fairness, transparency and accountability in algorithmic decision-making. An interpretivist, inductive qualitative design was adopted, using a multiple case study strategy and drawing on semi-structured interviews with thirty-five practitioners across public, private and semi-government sectors. Data from interviews, observations and documentation were analysed using thematic analysis and the Gioia methodology to produce a structured interpretation of practitioner experiences.

The findings identify four interrelated dimensions of algorithmic bias: algorithmic pollution arising from poor contextual alignment and data quality; fairness concerns linked to opacity and uneven oversight; racial and gender bias embedded in training data and organisational processes; and broader organisational and societal implications affecting legitimacy and trust. These issues are shaped by local linguistic and cultural factors, uneven governance maturity, manual bias detection practices and reliance on imported technological systems. Alongside these challenges, the study shows that algorithmic systems can provide operational benefits when supported by strong governance and human oversight. The thesis contributes a contextually grounded framework for responsible algorithmic governance in Saudi Arabia, demonstrating that algorithmic bias is not solely a technical failure, but a systemic issue shaped by organisational routines, institutional expectations and societal context. The study offers practical recommendations for leaders, regulators and technology developers seeking to strengthen fairness, transparency and accountability in AI systems and provides a foundation for future research on responsible AI in rapidly developing digital economies.


Published: Swansea, 2026
Institution: Swansea University
Faculty: Faculty of Humanities and Social Sciences, School of Management - Business Management
Degree level: Doctoral
Degree name: Ph.D
Supervisors: Dennehy, D., Dwivedi, Y. K., and Cotterell, D.
Funder: Imam Mohammad Ibn Saud Islamic University
Keywords: Algorithmic Bias; Organisational Decision-Making; Algorithmic Governance; Socio-Technical Systems; Saudi Vision 2030
Copyright: the author, Razan Saud M Alowayfi, 2026
URI: https://cronfa.swan.ac.uk/Record/cronfa71643