
Journal article

The role of institutional and self in the formation of trust in artificial intelligence technologies

Lai-Wan Wong, Garry Wei-Han Tan, Keng-Boon Ooi, Yogesh Dwivedi

Internet Research

Swansea University Author: Yogesh Dwivedi

  • FinalManuscript.pdf

    PDF | Accepted Manuscript

    Copyright © 2023, Emerald Publishing Limited. Distributed under the terms of a Creative Commons Attribution Non Commercial 4.0 License (CC BY-NC 4.0).

Published in: Internet Research
ISSN: 1066-2243
Published: Emerald 2023

URI: https://cronfa.swan.ac.uk/Record/cronfa62227
Abstract: Purpose: The deployment of artificial intelligence (AI) technologies in travel and tourism has received much attention in the wake of the pandemic. While societal adoption of AI has accelerated, it also raises some trust challenges. Literature on trust in AI is scant, especially regarding the vulnerabilities faced by different stakeholders to inform policy and practice. This work proposes a framework for understanding the use of AI technologies from the perspectives of the institution and the self, and how trust forms among travelers in the mandated use of AI-based technologies.
Design/methodology/approach: An empirical investigation using partial least squares structural equation modeling (PLS-SEM) was employed on responses from 209 users. This paper considered factors related to the self (perceptions of self-threat, privacy empowerment, trust propensity) and the institution (regulatory protection, corporate privacy responsibility) to understand the formation of trust in AI use for travelers.
Findings: Results showed that self-threat, trust propensity and regulatory protection influence users' trust in AI use, while privacy empowerment and corporate privacy responsibility do not.
Originality/value: Insights from past studies on AI in travel and tourism are limited. This study advances the current literature on affordance and reactance theories to provide a better understanding of what makes travelers trust the mandated use of AI technologies. It also demonstrates the paradoxical effects of self and institution on technologies and their relationship to trust. For practice, this study offers insights into enhancing adoption by developing trust.
Keywords: Artificial intelligence, Trust, Self-threat, Corporate privacy responsibility, Regulatory protection
College: Faculty of Humanities and Social Sciences
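
As a rough illustration of the analysis described in the abstract above, the sketch below simulates placeholder survey items for the six constructs named there, forms mean composite scores, and estimates the structural paths with ordinary least squares. This is a simplified stand-in for the PLS-SEM procedure reported in the paper, not the authors' actual analysis; all item data, item counts and variable names are hypothetical.

```python
# Minimal sketch (assumptions, not the authors' analysis): approximate the
# structural part of a PLS-SEM model by (1) forming composite scores for each
# construct as the mean of its hypothetical Likert items and (2) regressing
# trust on the five antecedents named in the abstract.
import numpy as np

rng = np.random.default_rng(0)
n = 209  # sample size reported in the abstract

# Hypothetical 5-point Likert items (3 items per construct), simulated here
# only so the sketch runs end to end.
constructs = ["self_threat", "privacy_empowerment", "trust_propensity",
              "regulatory_protection", "corporate_privacy_responsibility",
              "trust_in_ai"]
items = {c: rng.integers(1, 6, size=(n, 3)) for c in constructs}

# Step 1: composite (mean) scores stand in for the latent variables.
scores = {c: items[c].mean(axis=1) for c in constructs}

# Step 2: ordinary least squares for the structural paths
# trust_in_ai ~ the five antecedents (a simplification of PLS path weighting).
X = np.column_stack([scores[c] for c in constructs[:-1]])
X = np.column_stack([np.ones(n), X])  # add an intercept column
y = scores["trust_in_ai"]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, b in zip(["intercept"] + constructs[:-1], beta):
    print(f"{name:35s} path estimate: {b:+.3f}")
```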