Journal article
Comparative study of ChatGPT and human evaluators on the assessment of medical literature according to recognised reporting standards
BMJ Health & Care Informatics, Volume: 30, Issue: 1, Start page: e100830
Swansea University Authors: Richard Roberts, Stephen Ali, Hayley Hutchings, Thomas Dobbs, Iain Whitaker
PDF | Version of Record
© Author(s) (or their employer(s)) 2023. Published by BMJ. Distributed under the terms of a Creative Commons Attribution 4.0 License (CC BY 4.0).
Download (1.08MB)
DOI (Published version): 10.1136/bmjhci-2023-100830
Abstract
Introduction: Amid clinicians’ challenges in staying updated with medical research, artificial intelligence (AI) tools like the large language model (LLM) ChatGPT could automate appraisal of research quality, saving time and reducing bias. This study compares the proficiency of ChatGPT3 against human evaluation in scoring abstracts to determine its potential as a tool for evidence synthesis.

Methods: We compared ChatGPT’s scoring of implant dentistry abstracts with human evaluators using the Consolidated Standards of Reporting Trials for Abstracts reporting standards checklist, yielding an overall compliance score (OCS). Bland-Altman analysis assessed agreement between human and AI-generated OCS percentages. Additional error analysis included mean difference of OCS subscores, Welch’s t-test and Pearson’s correlation coefficient.

Results: Bland-Altman analysis showed a mean difference of 4.92% (95% CI 0.62%, 0.37%) in OCS between human evaluation and ChatGPT. Error analysis displayed small mean differences in most domains, with the highest in ‘conclusion’ (0.764 (95% CI 0.186, 0.280)) and the lowest in ‘blinding’ (0.034 (95% CI 0.818, 0.895)). The strongest correlations were in ‘harms’ (r=0.32, p<0.001) and ‘trial registration’ (r=0.34, p=0.002), whereas the weakest were in ‘intervention’ (r=0.02, p<0.001) and ‘objective’ (r=0.06, p<0.001).

Conclusion: LLMs like ChatGPT can help automate appraisal of medical literature, aiding in the identification of accurately reported research. Possible applications of ChatGPT include integration within medical databases for abstract evaluation. Current limitations include the token limit, restricting its usage to abstracts. As AI technology advances, future versions like GPT-4 could offer more reliable, comprehensive evaluations, enhancing the identification of high-quality research and potentially improving patient outcomes.
| Published in: | BMJ Health & Care Informatics |
|---|---|
| ISSN: | 2632-1009 |
| Published: | BMJ, 2023 |
| Online Access: | Check full text |
| URI: | https://cronfa.swan.ac.uk/Record/cronfa64605 |
Funders: The research conducted herein was funded by Swansea University. SRA and TDD are funded by the Welsh Clinical Academic Training Fellowship (no award number). SRA received a Paton Masser grant from the British Association of Plastic, Reconstructive and Aesthetic Surgeons to support this work (no award number). ISW is the surgical specialty lead for Health and Care Research Wales and the chief investigator for the Scar Free Foundation & Health and Care Research Wales Programme of Reconstructive and Regenerative Surgery Research (no award number). The Scar Free Foundation is the only medical research charity focused on scarring with the mission to achieve scar-free healing within a generation. ISW is an associate editor for the Annals of Plastic Surgery, editorial board member of BMC Medicine and takes numerous other editorial board roles.
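The methods described in the abstract centre on Bland-Altman agreement between human and ChatGPT overall compliance scores. Below is a minimal Python sketch of that style of analysis, assuming paired OCS percentages (one value per abstract); the function name and the example values are illustrative assumptions, not data from the study.

```python
import numpy as np

def bland_altman(human_scores, ai_scores):
    """Bland-Altman agreement statistics for paired scores.

    Both inputs are sequences of overall compliance score (OCS)
    percentages, one per abstract. Returns the mean difference (bias)
    and the 95% limits of agreement.
    """
    human = np.asarray(human_scores, dtype=float)
    ai = np.asarray(ai_scores, dtype=float)
    diff = human - ai                      # per-abstract difference in OCS %
    bias = diff.mean()                     # mean difference (systematic bias)
    half_width = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits of agreement
    return bias, bias - half_width, bias + half_width

if __name__ == "__main__":
    # Hypothetical OCS percentages for three abstracts
    human = [62.5, 75.0, 50.0]
    chatgpt = [56.3, 75.0, 43.8]
    bias, lower, upper = bland_altman(human, chatgpt)
    print(f"mean difference: {bias:.2f}%  limits of agreement: ({lower:.2f}%, {upper:.2f}%)")
```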