
Journal article

Questioning Racial and Gender Bias in AI-based Recommendations: Do Espoused National Cultural Values Matter?

Manjul Gupta, Carlos M. Parra, Denis Dennehy

Information Systems Frontiers

Swansea University Author: Denis Dennehy

  • 59527.pdf

    PDF | Version of Record (707.92 KB)

    Copyright: The Author(s) 2021. This article is licensed under a Creative Commons Attribution 4.0 International License.


Published in: Information Systems Frontiers
ISSN: 1387-3326 (print), 1572-9419 (online)
Published: Springer Science and Business Media LLC, 2021

URI: https://cronfa.swan.ac.uk/Record/cronfa59527
Abstract: Recommender systems, one realm of AI, have attracted significant research attention due to concerns about their devastating effects on society's most vulnerable and marginalised communities. Both the media and the academic literature provide compelling evidence that AI-based recommendations help to perpetuate and exacerbate racial and gender biases. Yet, there is limited knowledge about the extent to which individuals might question AI-based recommendations when they are perceived as biased. To address this gap in knowledge, we investigate the effects of espoused national cultural values on AI questionability by examining how individuals might question AI-based recommendations due to perceived racial or gender bias. Data collected from 387 survey respondents in the United States indicate that individuals with espoused national cultural values associated with collectivism, masculinity, and uncertainty avoidance are more likely to question biased AI-based recommendations. This study advances understanding of how cultural values affect AI questionability due to perceived bias, and it contributes to the current academic discourse about the need to hold AI accountable.
Keywords: Artificial intelligence; Recommender systems; Culture; Racial bias; Gender bias; Responsible AI; Algorithmic bias; Ethical AI
College: Faculty of Humanities and Social Sciences
Funders: The IReL Consortium