A recent JADA study tested ChatGPT’s dental knowledge to see if it could be a useful study tool for students.
In a recent study published in the Journal of the American Dental Association (JADA), researchers tested ChatGPT's dental knowledge to determine whether the artificial intelligence chatbot is a reliable source of dental information. The paper was co-authored by Arman Danesh, Hirad Pazouki, Kasra Danesh, BSc, Farzad Danesh, DDS, MSc, and Arsalan Danesh, DDS.
The study evaluated ChatGPT's performance on a board-style multiple-choice dental knowledge assessment to gauge its capacity to produce accurate dental content and, in turn, the risk of misinformation posed by dental students using the chatbot as an educational resource.
Researchers in the JADA study tested ChatGPT3.5 and the most recent version, ChatGPT4, on a series of text-only questions to gauge the reliability of their output.
ChatGPT3.5 and ChatGPT4 were asked questions obtained from 3 different sources: INBDE Bootcamp, ITDOnline, and a list of board-style questions provided by the Joint Commission on National Dental Examinations. Image-based questions were excluded, as ChatGPT only takes text-based inputs. The mean performance across 3 trials was reported for each model.
ChatGPT3.5 and ChatGPT4 answered 61.3% and 76.9% of the questions correctly on average, respectively.
Researchers concluded that ChatGPT3.5 did not perform sufficiently well on the board-style knowledge assessment. ChatGPT4, however, displayed a competent ability to produce accurate dental content. For now, the results suggest that students should not rely on ChatGPT as a primary learning resource, though it could serve as a supplementary study aid.
The authors suggested that future research should evaluate the proficiency of emerging versions of ChatGPT in dentistry to assess the chatbot's evolving role in dental education.