
Journal article

Comparative study of ChatGPT and human evaluators on the assessment of medical literature according to recognised reporting standards

Richard HR Roberts, Stephen R Ali, Hayley Hutchings, Thomas D Dobbs, Iain S Whitaker

BMJ Health & Care Informatics Online, Volume: 30, Issue: 1, Start page: e100830

Swansea University Authors: Hayley Hutchings, Richard HR Roberts

  • 64605.VOR.pdf (PDF | Version of Record)

    © Author(s) (or their employer(s)) 2023. Published by BMJ. Distributed under the terms of a Creative Commons Attribution 4.0 License (CC BY 4.0).


Published in: BMJ Health & Care Informatics Online
ISSN: 2632-1009
Published: BMJ 2023

URI: https://cronfa.swan.ac.uk/Record/cronfa64605
Abstract:
Introduction: Amid clinicians’ challenges in staying updated with medical research, artificial intelligence (AI) tools like the large language model (LLM) ChatGPT could automate appraisal of research quality, saving time and reducing bias. This study compares the proficiency of ChatGPT3 against human evaluation in scoring abstracts to determine its potential as a tool for evidence synthesis.
Methods: We compared ChatGPT’s scoring of implant dentistry abstracts with human evaluators using the Consolidated Standards of Reporting Trials for Abstracts reporting standards checklist, yielding an overall compliance score (OCS). Bland-Altman analysis assessed agreement between human and AI-generated OCS percentages. Additional error analysis included the mean difference of OCS subscores, Welch’s t-test and Pearson’s correlation coefficient.
Results: Bland-Altman analysis showed a mean difference of 4.92% (95% CI 0.62%, 0.37%) in OCS between human evaluation and ChatGPT. Error analysis displayed small mean differences in most domains, with the highest in ‘conclusion’ (0.764 (95% CI 0.186, 0.280)) and the lowest in ‘blinding’ (0.034 (95% CI 0.818, 0.895)). The strongest correlations were in ‘harms’ (r=0.32, p<0.001) and ‘trial registration’ (r=0.34, p=0.002), whereas the weakest were in ‘intervention’ (r=0.02, p<0.001) and ‘objective’ (r=0.06, p<0.001).
Conclusion: LLMs like ChatGPT can help automate appraisal of medical literature, aiding in the identification of accurately reported research. Possible applications of ChatGPT include integration within medical databases for abstract evaluation. Current limitations include the token limit, restricting its usage to abstracts. As AI technology advances, future versions like GPT4 could offer more reliable, comprehensive evaluations, enhancing the identification of high-quality research and potentially improving patient outcomes.
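The agreement analysis named in the abstract can be illustrated with a minimal sketch. The data below are invented for illustration only (not the study’s data): paired overall compliance score (OCS) percentages for the same abstracts from a human evaluator and from an LLM. A Bland-Altman analysis computes the mean of the paired differences (the bias) and the 95% limits of agreement, bias ± 1.96 × SD of the differences.

```python
# Bland-Altman agreement sketch on illustrative (invented) paired OCS data.
from statistics import mean, stdev

human_ocs = [72.0, 65.0, 80.0, 55.0, 90.0, 60.0, 75.0, 68.0]  # human evaluator, %
llm_ocs   = [70.0, 60.0, 78.0, 50.0, 85.0, 58.0, 70.0, 62.0]  # LLM, %

# Paired differences (human minus LLM) for each abstract.
diffs = [h - m for h, m in zip(human_ocs, llm_ocs)]

bias = mean(diffs)                # mean difference: systematic offset between raters
sd = stdev(diffs)                 # sample SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias = {bias:.2f}%")
print(f"limits of agreement = ({loa[0]:.2f}%, {loa[1]:.2f}%)")
```

A small bias with narrow limits of agreement would indicate that the two raters can be used interchangeably; a large bias, as in any agreement study, would suggest a systematic scoring offset.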
College: Faculty of Medicine, Health and Life Sciences
Funders: The research conducted herein was funded by Swansea University. SRA and TDD are funded by the Welsh Clinical Academic Training Fellowship (no award number). SRA received a Paton Masser grant from the British Association of Plastic, Reconstructive and Aesthetic Surgeons to support this work (no award number). ISW is the surgical specialty lead for Health and Care Research Wales and the chief investigator for the Scar Free Foundation & Health and Care Research Wales Programme of Reconstructive and Regenerative Surgery Research (no award number). The Scar Free Foundation is the only medical research charity focused on scarring with the mission to achieve scar-free healing within a generation. ISW is an associate editor for the Annals of Plastic Surgery, editorial board member of BMC Medicine and takes numerous other editorial board roles.