Assessing question characteristic influences on ChatGPT's performance and response-explanation consistency: Insights from Taiwan's Nursing Licensing Exam

Mei Chin Su, Li En Lin, Li Hwa Lin, Yu Chun Chen*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Background: This study investigates the integration of an artificial intelligence tool, specifically ChatGPT, into nursing education, addressing its effectiveness in exam preparation and self-assessment.

Objective: This study aims to evaluate the performance of ChatGPT, one of the most promising artificial intelligence-driven language understanding tools, in answering question banks used for nursing licensing examination preparation. It further analyzes the question characteristics that might affect the accuracy of ChatGPT-generated answers and examines its reliability through human expert review.

Design: Cross-sectional survey comparing ChatGPT-generated answers and their explanations.

Setting: 400 questions from Taiwan's 2022 Nursing Licensing Exam.

Methods: The study analyzed 400 questions from five distinct subjects of Taiwan's 2022 Nursing Licensing Exam using the ChatGPT model, which provided an answer and an in-depth explanation for each question. The impact of question characteristics, such as type and cognitive level, on the accuracy of ChatGPT-generated responses was assessed using logistic regression analysis. Additionally, human experts evaluated the explanation for each question, comparing it with the ChatGPT-generated answer to determine consistency.

Results: ChatGPT achieved an overall accuracy of 80.75 % on Taiwan's National Nursing Exam, a score sufficient to pass the exam. The accuracy of ChatGPT-generated answers diverged significantly across test subjects, ranging from General Medicine at 88.75 %, through Medical–Surgical Nursing at 80.0 %, Psychology and Community Nursing at 70.0 %, and Obstetrics and Gynecology Nursing at 67.5 %, down to Basic Nursing at 63.0 %. ChatGPT was more likely to produce incorrect responses for questions with certain characteristics, notably those with clinical vignettes [odds ratio 2.19, 95 % confidence interval 1.24–3.87, P = 0.007] and complex multiple-choice questions [odds ratio 2.37, 95 % confidence interval 1.00–5.60, P = 0.049]. Furthermore, 14.25 % of ChatGPT-generated answers were inconsistent with their explanations, reducing the overall accuracy to 74 %.

Conclusions: This study reveals ChatGPT's capabilities and limitations in nursing exam preparation, underscoring its potential as an auxiliary educational tool. It highlights the model's varied performance across different question types and the notable inconsistencies between its answers and explanations. The study contributes to the understanding of artificial intelligence in learning environments and can guide the development of more effective and reliable artificial intelligence-based educational technologies.

Tweetable abstract: New study reveals ChatGPT's potential and challenges in nursing education: achieves 80.75 % accuracy in exam prep but faces hurdles with complex questions and logical consistency.
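For readers who want to see the shape of the odds-ratio analysis the abstract reports, the following is a minimal sketch in Python using simulated data. The variable names (is_incorrect, has_vignette, is_complex_mc), the simulated dataset, and the use of statsmodels are illustrative assumptions, not the authors' actual data or pipeline; the sketch only shows how odds ratios with 95 % confidence intervals of this form are typically obtained from a logistic regression.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated toy data: one row per exam question (hypothetical, not study data).
rng = np.random.default_rng(seed=0)
n = 400  # the study analyzed 400 questions
has_vignette = rng.integers(0, 2, size=n)   # 1 = question includes a clinical vignette
is_complex_mc = rng.integers(0, 2, size=n)  # 1 = complex multiple-choice format

# Simulate incorrect answers with log-odds loosely matching the reported
# odds ratios (2.19 for clinical vignettes, 2.37 for complex multiple-choice).
log_odds = -1.8 + np.log(2.19) * has_vignette + np.log(2.37) * is_complex_mc
p_incorrect = 1.0 / (1.0 + np.exp(-log_odds))
is_incorrect = rng.binomial(1, p_incorrect)

df = pd.DataFrame({"is_incorrect": is_incorrect,
                   "has_vignette": has_vignette,
                   "is_complex_mc": is_complex_mc})

# Logistic regression of answer incorrectness on question characteristics.
model = smf.logit("is_incorrect ~ has_vignette + is_complex_mc", data=df).fit(disp=False)

# Exponentiated coefficients give odds ratios; conf_int() gives the 95 % CIs.
print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95 % confidence intervals
print(model.pvalues)             # Wald P-values

Note that the outcome here is coded as incorrectness, so odds ratios above 1 indicate question characteristics associated with higher odds of a wrong ChatGPT answer, matching the framing of the results above.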

Original language: English
Article number: 104717
Journal: International Journal of Nursing Studies
Volume: 153
State: Published - May 2024

Keywords

  • Accuracy
  • Artificial intelligence language understanding tools
  • ChatGPT
  • ChatGPT-generated answers
  • Clinical vignettes
  • Consistency
  • Human-verification of explanations
  • Nursing license exam
  • Question bank
  • Question cognitive level
  • Question type
